Celery worker crashes when I start another one - Django

I use Django with Celery and Redis.
I would like to have three queues and three workers.
My Celery settings in settings.py look like this:
from kombu import Exchange, Queue

CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Europe/Berlin'
# CELERY QUEUES SETUP
CELERY_DEFAULT_QUEUE = 'default'
CELERY_DEFAULT_ROUTING_KEY = 'default'
CELERY_TASK_QUEUES = (
    Queue('default', Exchange('default'), routing_key='default'),
    Queue('manually_crawl', Exchange('manually_crawl'), routing_key='manually_crawl'),
    Queue('periodically_crawl', Exchange('periodically_crawl'), routing_key='periodically_crawl'),
)
CELERY_ROUTES = {
    'api.tasks.crawl_manually': {'queue': 'manually_crawl', 'routing_key': 'manually_crawl'},
    'api.tasks.crawl_periodically': {'queue': 'periodically_crawl', 'routing_key': 'periodically_crawl'},
    'api.tasks.crawl_firsttime': {'queue': 'default', 'routing_key': 'default'},
}
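For reference, the CELERY_-prefixed names above are picked up by the standard Django integration in proj/celery.py (a sketch of the documented Celery/Django pattern; the proj name matches the commands below):
# proj/celery.py - standard Celery/Django wiring (sketch)
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')
# Load every Django setting prefixed with CELERY_ as Celery config.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Find tasks.py modules in all installed apps (e.g. api.tasks).
app.autodiscover_tasks()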
Later I will start the workers with celery multi (a sketch follows below), but in the development phase I would like to start the workers manually to see errors and other output.
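A sketch of what that celery multi command could look like for the three queues (assumption: the per-node -Q:&lt;node&gt; syntax from celery multi's documented per-node options; the node names are illustrative):
celery multi start default_worker manually_crawl periodically_crawl -A proj -l info \
    -Q:default_worker default \
    -Q:manually_crawl manually_crawl \
    -Q:periodically_crawl periodically_crawl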
I start the Redis server with redis-server and then I start the first worker, default, with:
celery -A proj worker -Q default -l debug -n default_worker
If i try to start the next worker in a new terminal with:
celery -A proj worker -Q manually_crawl -l debug -n manually_crawl
I get an error in the first default worker terminal:
[2019-10-28 09:32:58,284: INFO/MainProcess] sync with celery@manually_crawl
[2019-10-28 09:32:58,290: ERROR/MainProcess] Control command error: OperationalError("\nCannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.\nProbably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.\n")
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 439, in _reraise_as_library_errors
yield
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 518, in _ensured
return fun(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/kombu/messaging.py", line 203, in _publish
mandatory=mandatory, immediate=immediate,
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
message, exchange, routing_key, **kwargs
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/virtual/exchange.py", line 70, in deliver
for queue in _lookup(exchange, routing_key):
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/redis.py", line 877, in _lookup
exchange, redis_key))
kombu.exceptions.InconsistencyError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/celery/worker/pidbox.py", line 46, in on_message
self.node.handle_message(body, message)
File "/usr/local/lib/python3.7/dist-packages/kombu/pidbox.py", line 145, in handle_message
return self.dispatch(**body)
File "/usr/local/lib/python3.7/dist-packages/kombu/pidbox.py", line 115, in dispatch
ticket=ticket)
File "/usr/local/lib/python3.7/dist-packages/kombu/pidbox.py", line 151, in reply
serializer=self.mailbox.serializer)
File "/usr/local/lib/python3.7/dist-packages/kombu/pidbox.py", line 285, in _publish_reply
**opts
File "/usr/local/lib/python3.7/dist-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 551, in _ensured
errback and errback(exc, 0)
File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 444, in _reraise_as_library_errors
sys.exc_info()[2])
File "/usr/local/lib/python3.7/dist-packages/vine/five.py", line 194, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 439, in _reraise_as_library_errors
yield
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 518, in _ensured
return fun(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/kombu/messaging.py", line 203, in _publish
mandatory=mandatory, immediate=immediate,
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
message, exchange, routing_key, **kwargs
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/virtual/exchange.py", line 70, in deliver
for queue in _lookup(exchange, routing_key):
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/redis.py", line 877, in _lookup
exchange, redis_key))
kombu.exceptions.OperationalError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.
Why?

There is currently a problem with the kombu library. According to this post, downgrading to 4.6.4 (or for some people 4.6.3) solves the problem:
jorijinnall commented 11 days ago:
Had the same issue.
I fixed it by downgrading kombu from 4.6.5 to 4.6.3.
I still had the bug in version 4.6.4.
(link: GitHub)
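A hedged example of applying the downgrade with pip (try 4.6.4 first; fall back to 4.6.3 if the error persists, as reported above):
pip install "kombu==4.6.4"
# if the control-command error still appears:
pip install "kombu==4.6.3"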

You can start multiple workers as shown below:
$ celery -A proj worker -l info --concurrency=4 -n wkr1@hostname
$ celery -A proj worker -l info --concurrency=2 -n wkr2@hostname
$ celery -A proj worker -l info --concurrency=2 -n wkr3@hostname
In the above example, there are three workers which will be able to spawn 4, 2, and 2 child processes respectively. It is normally advised to run a single worker per machine, and the concurrency value defines how many processes will run in parallel.
The default number of those processes is equal to the number of cores on that machine.
I hope this helps.
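To mirror the three-queue layout from the question, the same pattern can pin one worker to each queue (a sketch; the queue names are taken from the question, and %h expands to the hostname):
$ celery -A proj worker -Q default -l info -n default_worker@%h
$ celery -A proj worker -Q manually_crawl -l info -n manually_crawl@%h
$ celery -A proj worker -Q periodically_crawl -l info -n periodically_crawl@%h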

Related

Flask-SocketIO WebSocket transport is not available

I've got a Flask server with Flask-SocketIO to do instant reloads on some pages. Now I'm getting this message in the console:
The WebSocket transport is not available, you must install a WebSocket server that is compatible with your async mode to enable it. See the documentation for details.
As described here https://github.com/miguelgrinberg/Flask-SocketIO/issues/647, I tried to install gevent or eventlet, but neither one changed anything.
The code on the client is like this:
import { io } from "https://cdn.socket.io/4.4.1/socket.io.esm.min.js";
var socket = io();
socket.emit(event,data,(response) => {response_func(response);})
On the server:
@socketio.on(event)
def functionOnEvent(data):
    functions(data)
    return data
Both code samples are of course simplified, but there is no async call of any sort.
Update:
After installing eventlet again and manually setting the async_mode, the message disappeared. But now I get the following error from eventlet:
ERROR:Error on request:
Traceback (most recent call last):
File "C:\...\venv\lib\site-packages\werkzeug\serving.py", line 319, in run_wsgi
execute(self.server.app)
File "C:\...\venv\lib\site-packages\werkzeug\serving.py", line 306, in execute
application_iter = app(environ, start_response)
File "C:\...\venv\lib\site-packages\flask\app.py", line 2095, in __call__
return self.wsgi_app(environ, start_response)
File "C:\...\venv\lib\site-packages\flask_socketio\__init__.py", line 43, in __call__
return super(_SocketIOMiddleware, self).__call__(environ,
File "C:\...\venv\lib\site-packages\engineio\middleware.py", line 63, in __call__
return self.engineio_app.handle_request(environ, start_response)
File "C:\...\venv\lib\site-packages\socketio\server.py", line 597, in handle_request
return self.eio.handle_request(environ, start_response)
File "C:\...\venv\lib\site-packages\engineio\server.py", line 411, in handle_request
packets = socket.handle_get_request(
File "C:\...\venv\lib\site-packages\engineio\socket.py", line 103, in handle_get_request
return getattr(self, '_upgrade_' + transport)(environ,
File "C:\...\venv\lib\site-packages\engineio\socket.py", line 158, in _upgrade_websocket
return ws(environ, start_response)
File "C:\...\venv\lib\site-packages\engineio\async_drivers\eventlet.py", line 16, in __call__
raise RuntimeError('You need to use the eventlet server. '
RuntimeError: You need to use the eventlet server. See the Deployment section of the documentation for more information.
As far as I understand, and as shown in this example https://github.com/miguelgrinberg/Flask-SocketIO-Chat, I thought that when running the app like socketio.run(app), it should automatically run an eventlet server?
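For reference, a minimal sketch of pinning the async mode explicitly, as in the update above (assuming Flask-SocketIO and eventlet are installed; all names are illustrative):
# app.py (sketch)
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# Force eventlet instead of relying on async-mode autodetection.
socketio = SocketIO(app, async_mode='eventlet')

if __name__ == '__main__':
    # socketio.run() wraps the eventlet WSGI server when async_mode='eventlet',
    # instead of Werkzeug's development server.
    socketio.run(app, host='127.0.0.1', port=5000)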

Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists

I'm trying to spawn a few background celery beat processes using docker-compose, but they are not working anymore. My configuration:
docker-compose-dev.yml
worker-periodic:
  image: dev3_web
  restart: always
  volumes:
    - ./services/web:/usr/src/app
    - ./services/web/celery_logs:/usr/src/app/celery_logs
  command: celery beat -A celery_worker.celery --schedule=/tmp/celerybeat-schedule --loglevel=DEBUG --pidfile=/tmp/celerybeat.pid
  environment:
    - CELERY_BROKER=redis://redis:6379/0
    - CELERY_RESULT_BACKEND=redis://redis:6379/0
    - FLASK_ENV=development
    - APP_SETTINGS=project.config.DevelopmentConfig
    - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
    - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    - SECRET_KEY=my_precious
  depends_on:
    - web
    - redis
    - web-db
  links:
    - redis:redis
    - web-db:web-db
After I bring the containers up, I run $ docker ps and get (note how worker-periodic_1 has always been up for only a few seconds):
697322a621d5 dev3_web "celery worker -A ce…" 24 hours ago Up 5 minutes dev3_worker-analysis_1
d8e414aa4e5b dev3_web "celery worker -A ce…" 24 hours ago Up 5 minutes dev3_worker-learning_1
ae327266132c dev3_web "flower -A celery_wo…" 24 hours ago Up 5 minutes 0.0.0.0:5555->5555/tcp dev3_monitor_1
3ccb79e01412 dev3_web "celery beat -A cele…" 24 hours ago Up 14 seconds dev3_worker-periodic_1
a50e1276f692 dev3_web "celery worker -A ce…" 24 hours ago Up 5 minutes dev3_worker-scraping_1
All celery workers work when endpoints are called, except for the periodically automated celery beat process. When I bring the containers up, the logs in celery_logs/worker_analysis.log complain:
[2019-11-16 23:29:20,880: DEBUG/MainProcess] pidbox received method hello(from_node='celery@d8e414aa4e5b', revoked={}) [reply_to:{'exchange': 'reply.celery.pidbox', 'routing_key': '85f4128f-2f75-3996-8375-2a19aa58d5d4'} ticket:0daa0dc4-fa09-438d-9003-ccbd39f259dd]
[2019-11-16 23:29:20,907: INFO/MainProcess] sync with celery@d8e414aa4e5b
[2019-11-16 23:29:21,018: ERROR/MainProcess] Control command error: OperationalError("\nCannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.\nProbably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.\n",)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 439, in _reraise_as_library_errors
yield
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 518, in _ensured
return fun(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/kombu/messaging.py", line 203, in _publish
mandatory=mandatory, immediate=immediate,
File "/usr/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
message, exchange, routing_key, **kwargs
File "/usr/lib/python3.6/site-packages/kombu/transport/virtual/exchange.py", line 70, in deliver
for queue in _lookup(exchange, routing_key):
File "/usr/lib/python3.6/site-packages/kombu/transport/redis.py", line 877, in _lookup
exchange, redis_key))
kombu.exceptions.InconsistencyError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/celery/worker/pidbox.py", line 46, in on_message
self.node.handle_message(body, message)
File "/usr/lib/python3.6/site-packages/kombu/pidbox.py", line 145, in handle_message
return self.dispatch(**body)
File "/usr/lib/python3.6/site-packages/kombu/pidbox.py", line 115, in dispatch
ticket=ticket)
File "/usr/lib/python3.6/site-packages/kombu/pidbox.py", line 151, in reply
serializer=self.mailbox.serializer)
File "/usr/lib/python3.6/site-packages/kombu/pidbox.py", line 285, in _publish_reply
**opts
File "/usr/lib/python3.6/site-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 551, in _ensured
errback and errback(exc, 0)
File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 444, in _reraise_as_library_errors
sys.exc_info()[2])
File "/usr/lib/python3.6/site-packages/vine/five.py", line 194, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 439, in _reraise_as_library_errors
yield
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 518, in _ensured
return fun(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/kombu/messaging.py", line 203, in _publish
mandatory=mandatory, immediate=immediate,
File "/usr/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
message, exchange, routing_key, **kwargs
File "/usr/lib/python3.6/site-packages/kombu/transport/virtual/exchange.py", line 70, in deliver
for queue in _lookup(exchange, routing_key):
File "/usr/lib/python3.6/site-packages/kombu/transport/redis.py", line 877, in _lookup
exchange, redis_key))
kombu.exceptions.OperationalError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.
this is how celery is configured:
web/project/config.py:
class DevelopmentConfig(BaseConfig):
    # CELERY
    INSTALLED_APPS = ['routes']
    # celery config
    CELERYD_CONCURRENCY = 4
    # Add a one-minute timeout to all Celery tasks.
    CELERYD_TASK_SOFT_TIME_LIMIT = 60
    CELERY_ENABLE_UTC = False
    CELERY_TIMEZONE = 'America/Sao_Paulo'
    CELERY_BROKER_URL = os.environ.get('CELERY_BROKER')
    CELERY_RESULT_BACKEND = os.environ.get('CELERY_RESULT_BACKEND')
    CELERY_IMPORTS = ('project.api.routes.background',)
    # periodic tasks
    CELERYBEAT_SCHEDULE = {
        'playlist_generator_with_audio_features': {
            'task': 'project.api.routes.background.playlist_generator_with_audio_features',
            # At minute 59 of every hour
            'schedule': crontab(minute=59),
            'args': [('user_id'),]
        },
        'cache_user_tracks_with_analysis': {
            'task': 'project.api.routes.background.cache_user_tracks_with_analysis',
            # Every hour
            'schedule': crontab(minute=0, hour='*/1'),
            'args': ('user_id', 'token')
        },
    }
this is an example task at project/api/routes/background.py, at my Flask server:
@celery.task(queue='analysis', default_retry_delay=30, max_retries=3, soft_time_limit=1000)
def cache_user_tracks_with_analysis(user_id, token):
    # business logic
    return {'Status': 'Task completed!',
            'features': results}
In my requirements.txt, kombu is not declared explicitly, and I have:
celery==4.2.1
redis==3.2.0
What am I missing?
This is an open celery/kombu issue: https://github.com/celery/kombu/issues/1063
Explicitly downgrading to kombu==4.5.0 fixed the error for me.
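A hedged way to apply that fix is to pin kombu explicitly in requirements.txt, so pip stops resolving the broken newer release (the first two versions come from the question):
celery==4.2.1
redis==3.2.0
kombu==4.5.0  # pinned explicitly; otherwise pip pulls in a broken 4.6.x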

Fix Celery issue "Can't pickle <class 'module'>: attribute lookup module on builtins failed"

I am running celery 4.1.1 on Windows and sending requests to Redis (on Ubuntu). Redis is properly connected and tested from the Windows side. But when I run this command:
celery -A acmetelbi worker --loglevel=info
I get this long error:
[tasks]
. accounts.tasks.myprinting
. acmetelbi.celery.debug_task
[2019-08-02 11:46:44,515: CRITICAL/MainProcess] Unrecoverable error: PicklingError("Can't pickle <class 'module'>: attribute lookup module on builtins failed",)
Traceback (most recent call last):
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\worker\worker.py", line 205, in start
    self.blueprint.start(self)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\bootsteps.py", line 119, in start
    step.start(parent)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\bootsteps.py", line 370, in start
    return self.obj.start()
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\concurrency\base.py", line 131, in start
    self.on_start()
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\concurrency\prefork.py", line 112, in on_start
    **self.options)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\pool.py", line 1007, in __init__
    self._create_worker_process(i)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\pool.py", line 1116, in _create_worker_process
    w.start()
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\process.py", line 124, in start
    self._popen = self._Popen(self)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\context.py", line 383, in _Popen
    return Popen(process_obj)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\popen_spawn_win32.py", line 79, in __init__
    reduction.dump(process_obj, to_child)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\reduction.py", line 99, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'module'>: attribute lookup module on builtins failed
(bi) C:\acmedata\bi_solution\acmetelbi>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\spawn.py", line 165, in spawn_main
    exitcode = _main(fd)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\spawn.py", line 207, in _main
    self = pickle.load(from_parent)
EOFError: Ran out of input
I am scratching my head and unable to understand how to fix this. Please help!
My code for creating a task in the Django app:
@task()
def myprinting(self):
    print("I am task")
and in settings.py:
# Other Celery settings
CELERY_BEAT_SCHEDULE = {
    'task-number-one': {
        'task': 'accounts.tasks.myprinting',
        'schedule': crontab(minute='*/30'),
    },
}
After spending many days on research, I have come to the conclusion that Celery has limitations on Windows: if you want to run Celery on Windows, you have to run it with the gevent pool:
python manage.py celery worker -P gevent --loglevel=INFO
and then, after this worker process is running, start celery beat accordingly to begin processing.
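A sketch of the full sequence (installing gevent via pip is an assumption; the manage.py celery entry point comes from django-celery, and on newer Celery versions the plain CLI equivalent is shown as well):
pip install gevent
# django-celery entry point, as used in the answer above:
python manage.py celery worker -P gevent --loglevel=INFO
# plain Celery CLI equivalent on newer versions (assumption):
celery -A acmetelbi worker -P gevent --loglevel=info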

SMTPServerDisconnected: Connection unexpectedly closed: timed out

After running the Sentry on-premise Docker container (version 8.20) and passing in the following environment variables:
-e SENTRY_EMAIL_HOST="smtp.sendgrid.net"
-e SENTRY_EMAIL_PORT=465
-e SENTRY_EMAIL_USE_TLS="True"
-e SENTRY_EMAIL_USER="apikey"
-e SENTRY_EMAIL_PASSWORD='****'
I am receiving the following:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sentry/tasks/base.py", line 54, in _wrapped
result = func(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sentry/tasks/email.py", line 76, in send_email
send_messages([message])
File "/usr/local/lib/python2.7/site-packages/sentry/utils/email.py", line 415, in send_messages
sent = connection.send_messages(messages)
File "/usr/local/lib/python2.7/site-packages/django/core/mail/backends/smtp.py", line 87, in send_messages
new_conn_created = self.open()
File "/usr/local/lib/python2.7/site-packages/django/core/mail/backends/smtp.py", line 48, in open
local_hostname=DNS_NAME.get_fqdn())
File "/usr/local/lib/python2.7/smtplib.py", line 256, in __init__
(code, msg) = self.connect(host, port)
File "/usr/local/lib/python2.7/smtplib.py", line 317, in connect
(code, msg) = self.getreply()
File "/usr/local/lib/python2.7/smtplib.py", line 365, in getreply
+ str(e))
SMTPServerDisconnected: Connection unexpectedly closed: timed out
Anyone have an idea what might be the cause?
According to the SendGrid documentation:
You can also connect via SSL on port 465.
It seems that currently django.core.mail.backends.smtp.EmailBackend does not support sending emails over SSL, only TLS.
I changed the port to 587 and emails are going through as expected.
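In other words, the working configuration would look something like this (a sketch of the question's variables with only the port changed):
-e SENTRY_EMAIL_HOST="smtp.sendgrid.net"
-e SENTRY_EMAIL_PORT=587
-e SENTRY_EMAIL_USE_TLS="True"
-e SENTRY_EMAIL_USER="apikey"
-e SENTRY_EMAIL_PASSWORD='****'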

Celery on Django not working

Sending emails with Celery works fine on the production server.
Trying to use it on the local dev VM, it does not work.
I get this when I restart:
Starting web server apache2 [ OK ]
Starting message broker rabbitmq-server [ OK ]
Starting celery task worker server celeryd [ OK ]
Starting celeryev...
: No such file or directory
Also, I get this error in the console when loading the page:
error: [Errno 104] Connection reset by peer
Production setting:
import djcelery
djcelery.setup_loader()
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672 # default RabbitMQ listening port
BROKER_USER = "vs_user"
BROKER_PASSWORD = "user01"
BROKER_VHOST = "vs_vhost"
CELERY_BACKEND = "amqp" # telling Celery to report the results back to RabbitMQ
CELERY_RESULT_DBURI = ""
When I ran:
sudo rabbitmqctl list_vhosts
I get this:
Listing vhosts ...
/
...done.
What do I need to change in these settings to run it successfully on the local VM?
UPDATE
The vhost and user were definitely missing, so I ran the suggested commands.
They executed OK, but it still does not work; same error.
There must be one more thing that prevents it from working, and celeryev is the suspect.
This is what i get when stopping and starting server:
Stopping web server apache2 ... waiting . [ OK ]
Stopping message broker rabbitmq-server [ OK ]
Stopping celery task worker server celeryd start-stop-daemon: warning: failed to kill 28006: No such process
[ OK ]
Stopping celeryev...NOT RUNNING
Starting web server apache2 [ OK ]
Starting message broker rabbitmq-server [ OK ]
Starting celery task worker server celeryd [ OK ]
Starting celeryev...
: No such file or directory
Traceback (most recent call last):
File "/webapps/target/forums/json_views.py", line 497, in _send_forum_notifications
post_master_json.delay('ForumNotificationEmail', email_params)
File "/usr/local/lib/python2.6/dist-packages/celery-3.0.25-py2.6.egg/celery/app/task.py", line 357, in delay
return self.apply_async(args, kwargs)
File "/usr/local/lib/python2.6/dist-packages/celery-3.0.25-py2.6.egg/celery/app/task.py", line 474, in apply_async
**options)
File "/usr/local/lib/python2.6/dist-packages/celery-3.0.25-py2.6.egg/celery/app/amqp.py", line 250, in publish_task
**kwargs
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/messaging.py", line 164, in publish
routing_key, mandatory, immediate, exchange, declare)
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 470, in _ensured
interval_max)
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 396, in ensure_connection
interval_start, interval_step, interval_max, callback)
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/utils/__init__.py", line 217, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 246, in connect
return self.connection
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 761, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 720, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/transport/pyamqp.py", line 115, in establish_connection
conn = self.Connection(**opts)
File "/usr/local/lib/python2.6/dist-packages/amqp-1.0.13-py2.6.egg/amqp/connection.py", line 136, in __init__
self.transport = create_transport(host, connect_timeout, ssl)
File "/usr/local/lib/python2.6/dist-packages/amqp-1.0.13-py2.6.egg/amqp/transport.py", line 264, in create_transport
return TCPTransport(host, connect_timeout)
File "/usr/local/lib/python2.6/dist-packages/amqp-1.0.13-py2.6.egg/amqp/transport.py", line 99, in __init__
raise socket.error(last_err)
error: timed out
I ran manage.py celeryev and got a console showing workers and tasks. Everything is empty, and I only get Connection Error: error(timeout('timed out',),) repeatedly.
It looks like you don't have the virtual host you specified set up on your local RabbitMQ server.
You would first need to add the virtual host.
sudo rabbitmqctl add_vhost vs_vhost
Next you need to add the permissions for your user.
sudo rabbitmqctl set_permissions -p vs_vhost vs_user ".*" ".*" ".*"
Also, make sure that you actually have a user set up, otherwise you can add one using this command.
sudo rabbitmqctl add_user vs_user user01
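After running these, the setup can be verified with rabbitmqctl's listing commands (standard rabbitmqctl subcommands; the vhost name comes from the question):
sudo rabbitmqctl list_vhosts
sudo rabbitmqctl list_users
sudo rabbitmqctl list_permissions -p vs_vhost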