Celery: could not connect to RabbitMQ - Django

I'm using RabbitMQ as the broker for Celery. The issue occurs while running the command
celery -A proj worker --loglevel=info
The Celery console shows this:
[2017-06-23 07:57:09,261: ERROR/MainProcess] consumer: Cannot connect to amqp://bruce:**@127.0.0.1:5672//: timed out.
Trying again in 2.00 seconds...
[2017-06-23 07:57:15,285: ERROR/MainProcess] consumer: Cannot connect to amqp://bruce:**@127.0.0.1:5672//: timed out.
Trying again in 4.00 seconds...
Following are the logs from RabbitMQ:
=ERROR REPORT==== 23-Jun-2017::13:28:58 ===
closing AMQP connection <0.18756.0> (127.0.0.1:58424 -> 127.0.0.1:5672):
{handshake_timeout,frame_header}
=INFO REPORT==== 23-Jun-2017::13:29:04 ===
accepting AMQP connection <0.18897.0> (127.0.0.1:58425 -> 127.0.0.1:5672)
=ERROR REPORT==== 23-Jun-2017::13:29:14 ===
closing AMQP connection <0.18897.0> (127.0.0.1:58425 -> 127.0.0.1:5672):
{handshake_timeout,frame_header}
=INFO REPORT==== 23-Jun-2017::13:29:22 ===
accepting AMQP connection <0.19054.0> (127.0.0.1:58426 -> 127.0.0.1:5672)
Any input would be appreciated.

I know it's late, but I came across the same issue today and spent almost an hour finding the exact fix. Thought it might help someone else.
I was using Celery version 4.1.0.
Hopefully you have configured RabbitMQ properly; if not, please configure it as described on this page: http://docs.celeryproject.org/en/latest/getting-started/brokers/rabbitmq.html#setting-up-rabbitmq
Also cross-check that the broker URL is correct. Here is the broker URL syntax:
amqp://user_name:password@localhost/vhost_name
You might not need to specify the port number, since the default one will be selected automatically.
If you follow the same variables from the setup tutorial linked above, your broker URL will look like:
amqp://myuser:mypassword@localhost/myvhost
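To sanity-check that URL independently of Celery, you can try a direct connection with kombu (a rough sketch, using the tutorial's example credentials; substitute your own):
from kombu import Connection

# Example credentials from the setup tutorial linked above; replace with yours.
broker_url = 'amqp://myuser:mypassword@localhost/myvhost'
with Connection(broker_url) as conn:
    conn.connect()  # raises if the broker is unreachable or the credentials are wrong
    print('Broker reachable')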
Follow this project structure
Project
../app
../Project
../settings.py
../celery.py
../tasks.py
../celery_config.py
celery_config.py
# - - - - - - - - - -
# BROKER SETTINGS
# - - - - - - - - - -
# BROKER_URL = os.environ['APP_BROKER_URL']
BROKER_HEARTBEAT = 10
BROKER_HEARTBEAT_CHECKRATE = 2.0
# Setting BROKER_POOL_LIMIT to None disables pooling.
# Disabling pooling means a connection is opened and closed for every task.
# However, with the RabbitMQ cluster behind an Elastic Load Balancer,
# pooling does not work correctly and the connection is lost at some point.
# There seems to be no other way around it for the time being.
BROKER_POOL_LIMIT = None
BROKER_TRANSPORT_OPTIONS = {'confirm_publish': True}
BROKER_CONNECTION_TIMEOUT = 20
BROKER_CONNECTION_RETRY = True
BROKER_CONNECTION_MAX_RETRIES = 100
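Note that these are the old-style uppercase setting names. Celery 4.x still accepts them, though the lowercase equivalents (broker_pool_limit, broker_heartbeat, broker_transport_options, and so on) are the newer convention; whichever style you pick, don't mix the two in the same app.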
celery.py
from __future__ import absolute_import, unicode_literals
from celery import Celery
from Project import celery_config
app = Celery('Project',
             broker='amqp://myuser:mypassword@localhost/myvhost',
             backend='amqp://',
             include=['Project'])

# Optional configuration, see the application user guide.
# app.conf.update(
#     result_expires=3600,
#     CELERY_BROKER_POOL_LIMIT=None,
# )

app.config_from_object(celery_config)

if __name__ == '__main__':
    app.start()
tasks.py
from __future__ import absolute_import, unicode_literals
from .celery import app
@app.task
def add(x, y):
    return x + y
Then start Celery with "celery -A Project worker -l info" from the project directory.
Everything should work fine.
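To confirm the worker is actually picking up tasks, here is a quick check from a Python shell in the project directory (a rough sketch; it assumes the layout and the 'amqp://' result backend shown above):
from Project.tasks import add

# Queue the task on the broker; the running worker should log and execute it.
result = add.delay(4, 4)
print(result.get(timeout=10))  # -> 8, assuming the result backend above is reachable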

Set CELERY_BROKER_POOL_LIMIT = None in settings.py.

This solution is for GCP users.
I've been working on GCP and faced the same issue.
The error message was:
[2022-03-15 16:56:00,318: ERROR/MainProcess] consumer: Cannot connect
to amqp://root:**@34.125.161.132:5672/vhost: timed out.
I spent almost an hour solving this issue and finally found the solution:
you have to add port 5672 to the firewall rules.
Steps:
Go to Firewall
Select the default-allow-http rule
Press Edit
Find "Specified protocols and ports"
Add 5672 to the tcp box (for example, if you want to add more ports: 80,5672,8000)
Save the changes and there you go!
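After the rule is saved, a quick way to verify that port 5672 is now reachable from the machine running the worker (a small sketch; the IP is the broker address from the error above):
import socket

# Connectivity check: confirm TCP port 5672 on the broker host is open.
sock = socket.create_connection(("34.125.161.132", 5672), timeout=5)
print("Port 5672 is reachable")
sock.close()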

Related

Error in event loop with Flask, gremlin python and uWSGI

I'm running into a problem when using Flask with a gremlin database (it's an Amazon Neptune database) and using uWSGI. Everything works fine in my unit tests which use the test_client provided by Flask. However, in production we use uWSGI and there I get the following error:
There is no current event loop in thread 'uWSGIWorker4Core1'.
My app code is creating a connection to the database before a request and assigning it to the Flask g object. During teardown, the database connection is removed. The error happens when the app is trying to close the connection.
from flask import Flask, g
from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
app = Flask(__name__, instance_relative_config=True)
@app.before_request
def _db_connect():
    if not hasattr(g, 'graph_conn'):
        g.graph_conn = DriverRemoteConnection(app.config['DATABASE_HOST'], 'g')
        g.gg = traversal().withRemote(g.graph_conn)

# This hook ensures that the connection is closed when we've finished
# processing the request.
@app.teardown_appcontext
def _db_close(exc):
    if hasattr(g, 'graph_conn'):
        g.graph_conn.close()  # <- ERROR THROWN AT THIS LINE
        del g.graph_conn
the uWSGI config does use multiple threads:
[uwsgi]
http = 0.0.0.0:3031
manage-script-name = true
module = dogmaserver:app
processes = 4
threads = 2
offload-threads = 2
stats = 0.0.0.0:9191
But my understanding of Flask's g object was that it all stays on the same thread. Can anyone let me know what I'm missing?
I'm using Flask 1.0.2, gremlinpython 3.4.11 and uWSGI 2.0.17.1.
I worked around it by removing the threads configuration option in uWSGI, which leaves only a single thread per process.
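If you would rather keep uWSGI's threads, another option (a sketch, not from the original post; it assumes the error comes from the gremlin driver needing an asyncio event loop registered in the current thread) is to create a loop per thread before opening the connection:
import asyncio

@app.before_request
def _db_connect():
    # Worker threads spawned by uWSGI have no asyncio event loop by default;
    # create and register one so the gremlin driver can use it.
    try:
        asyncio.get_event_loop()
    except RuntimeError:
        asyncio.set_event_loop(asyncio.new_event_loop())
    if not hasattr(g, 'graph_conn'):
        g.graph_conn = DriverRemoteConnection(app.config['DATABASE_HOST'], 'g')
        g.gg = traversal().withRemote(g.graph_conn)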

Django Channels Worker not responding to websocket.connect

I'm having a problem with Django Channels. Daphne accepts WebSocket CONNECT requests properly, but the workers don't respond to the request with the supplied method in consumers.py. The thing is, this happens only most of the time: sometimes it responds with the method in consumers.py, but most of the time the worker doesn't respond at all. I have the same code working fine in a Vagrant (trusty64) environment, but it behaves like this on an actual trusty64 machine. It should be noted that the trusty64 machine that hosts the app also has other applications running (about 4 apps running at the same time).
I have a routing.py set up like this
from channels import route
from app.consumers import connect_tracking, disconnect_tracking
channel_routing = [
    route("websocket.connect", connect_tracking, path=r'^/websocket/tms/tracking/stream/$'),
    route("websocket.disconnect", disconnect_tracking, path=r'^/websocket/tms/tracking/stream/$'),
]
with the corresponding consumers.py that looks like this
import json
from channels import Group
from channels.sessions import channel_session
from channels.auth import http_session_user, channel_session_user, channel_session_user_from_http
from django.conf import settings
@channel_session_user_from_http
def connect_tracking(message):
    group_name = settings.TRACKING_GROUP_NAME
    print "%s is joining %s" % (message.user, group_name)
    Group(group_name).add(message.reply_channel)

@channel_session_user
def disconnect_tracking(message):
    group_name = settings.TRACKING_GROUP_NAME
    print "%s is joining %s" % (message.user, group_name)
    Group(group_name).discard(message.reply_channel)
and some channels related lines in settings.py like this
redis_host = os.environ.get('REDIS_HOST', 'localhost')
CHANNEL_LAYERS = {
    "default": {
        # This example app uses the Redis channel layer implementation asgi_redis
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [(redis_host, 6379)],
        },
        "ROUTING": "tms_app.routing.channel_routing",
    },
}
Referencing another question, I've tried running Daphne and the worker like this:
daphne tms_app.asgi:channel_layer --port 9015 --bind 0.0.0.0 -v2
python manage.py runworker -v3
I've captured Daphne's and the worker's logs; they look like this:
Daphne log:
2016-12-30 17:00:18,870 INFO Starting server at 0.0.0.0:9015, channel layer tms_app.asgi:channel_layer
2016-12-30 17:00:26,788 DEBUG WebSocket open for websocket.send!APpWONQKKDXR
192.168.31.197:48933 - - [30/Dec/2016:17:00:26] "WSCONNECT /websocket/tms/tracking/stream/" - -
2016-12-30 17:00:26,790 DEBUG Upgraded connection http.response!sqlMPEEtolDP to WebSocket websocket.send!APpWONQKKDXR
Corresponding worker log:
2016-12-30 17:00:22,265 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2016-12-30 17:00:22,265 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
As you can see, when there's a WSCONNECT event, the worker doesn't respond to it.
There's another question close to this issue that was solved by downgrading Twisted to 16.2, but that doesn't work for me.
UPDATE January 3, 2017
I cannot replicate the issue on a local Vagrant machine despite using the same code and the same settings for nginx, supervisor, gunicorn and Daphne. I tried changing the channel layer settings to use IPC instead of Redis, and it works. Here are the settings:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_ipc.IPCChannelLayer",
        "ROUTING": "tms_app.routing.channel_routing",
        "CONFIG": {
            "prefix": "tms",
        },
    },
}
However, this does not solve the underlying problem, since I intend to use the Redis channel layer because it is much easier to scale than IPC. Does this mean there's something wrong with my Redis server?
I think the reason your connection doesn't complete is that you are not sending the accept message, like this:
message.reply_channel.send({'accept': True})
This is what works for my version of Channels, but you should check the docs for your version to confirm what works for you.
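Applied to the connect_tracking consumer above, that would look roughly like this (a sketch for the Channels 1.x-style API used in the question; check your own version's docs):
@channel_session_user_from_http
def connect_tracking(message):
    group_name = settings.TRACKING_GROUP_NAME
    print "%s is joining %s" % (message.user, group_name)
    Group(group_name).add(message.reply_channel)
    # Explicitly accept the WebSocket handshake so Daphne completes the connection.
    message.reply_channel.send({'accept': True})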

Tasks not executing (Django + Heroku + Celery + RabbitMQ)

I'm using RabbitMQ for the first time and I must be misunderstanding some simple configuration settings. Note that I am encountering this issue while running the app locally right now; I have not yet attempted to launch to production via Heroku.
For this app, every 20 seconds I want to look for some unsent messages in the database, and send them via Twilio. Apologies in advance if I've left some relevant code out of my examples below. I've followed all of the Celery setup/config instructions. Here is my current setup:
BROKER_URL = 'amqp://VflhnMEP:8wGLOrNBP.........Bhshs' # Truncated URL string
from datetime import timedelta
CELERYBEAT_SCHEDULE = {
    'send_queued_messages_every_20_seconds': {
        'task': 'comm.tasks.send_queued_messages',
        'schedule': timedelta(seconds=20),
        # 'schedule': crontab(seconds='*/20')
    },
}
CELERY_TIMEZONE = 'UTC'
I am pretty sure that the tasks are piling up in RabbitMQ; the management dashboard shows all of the accumulated messages.
The function send_queued_messages should be called every 20 seconds.
comm/tasks.py
import datetime
from celery.decorators import periodic_task
from comm.utils import get_user_mobile_number
from comm.api import get_twilio_connection, send_message
from dispatch.models import Message
@periodic_task
def send_queued_messages(run_every=datetime.timedelta(seconds=20)):
    unsent_messages = Message.objects.filter(sent_success=False)
    connection = get_twilio_connection()
    for message in unsent_messages:
        mobile_number = get_user_mobile_number(message=message)
        try:
            send_message(
                connection=connection,
                mobile_number=mobile_number,
                message=message.raw_text
            )
            message.sent_success = True
            message.save()
        except BaseException as e:
            raise e
        pass
I'm pretty sure that I have something misconfigured with RabbitMQ or in my Heroku project settings, but I'm not sure how to continue troubleshooting. When I run 'celery -A myproject beat' everything appears to be running smoothly.
(venv)josephs-mbp:myproject josephfusaro$ celery -A myproject beat
celery beat v3.1.18 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://VflhnMEP:**@happ...Bhshs
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]@%INFO
. maxinterval -> now (0s)
[2015-05-27 03:01:53,810: INFO/MainProcess] beat: Starting...
[2015-05-27 03:02:13,941: INFO/MainProcess] Scheduler: Sending due task send_queued_messages_every_20_seconds (comm.tasks.send_queued_messages)
[2015-05-27 03:02:34,036: INFO/MainProcess] Scheduler: Sending due task send_queued_messages_every_20_seconds (comm.tasks.send_queued_messages)
So why aren't the tasks executing as they do without Celery being involved*?
My Procfile:
web: gunicorn myproject.wsgi --log-file -
worker: celery -A myproject beat
*I have confirmed that my code executes as expected without Celery being involved!
Special thanks to @MauroRocco for pushing me in the right direction on this. The pieces that I was missing are best explained in this tutorial: https://www.rabbitmq.com/tutorials/tutorial-one-python.html
Note: I needed to modify some of the code in the tutorial to use URLParameters, passing in the resource URL defined in my settings file.
The key line in both send.py and receive.py is:
connection = pika.BlockingConnection(pika.URLParameters(BROKER_URL))
and of course we need to import the BROKER_URL variable from settings.py
from settings import BROKER_URL
settings.py
BROKER_URL = 'amqp://VflhnMEP:8wGLOrNBP...4.bigwig.lshift.net:10791/sdklsfssd'
send.py
import pika
from settings import BROKER_URL
connection = pika.BlockingConnection(pika.URLParameters(BROKER_URL))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print " [x] Sent 'Hello World!'"
connection.close()
receive.py
import pika
from settings import BROKER_URL
connection = pika.BlockingConnection(pika.URLParameters(BROKER_URL))
channel = connection.channel()
channel.queue_declare(queue='hello')
print ' [*] Waiting for messages. To exit press CTRL+C'
def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)
channel.start_consuming()
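Run receive.py in one terminal and then send.py in another; if the broker URL and credentials are right, the consumer should print the "Hello World!" message, which confirms connectivity to RabbitMQ independently of Celery.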

Django and Celery: Tasks not imported

I am using Django with Celery to run two tasks in the background related to contacts/email parsing.
Structure is:
project
/api
/core
tasks.py
settings.py
settings.py file contains:
BROKER_URL = 'django://'
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
#celery
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
sys.path.append(os.path.dirname(os.path.basename(__file__)))
CELERY_IMPORTS = ['project.core.tasks']
import djcelery
djcelery.setup_loader()
# ....
INSTALLED_APPS = (
    #...
    'kombu.transport.django',
    'djcelery',
)
tasks.py contains:
from celery.task import Task
from celery.registry import tasks
class ParseEmails(Task):
    #...

class ImportGMailContactsFromGoogleAccount(Task):
    #...

tasks.register(ParseEmails)
tasks.register(ImportGMailContactsFromGoogleAccount)
Also, added in wsgi.py
os.environ["CELERY_LOADER"] = "django"
Now, I have this app hosted on a WebFaction server. On my localhost this runs fine, but on the WebFaction server, where the Django app is deployed on an Apache server, I get:
2013-01-23 17:25:00,067: ERROR/MainProcess] Task project.core.tasks.ImportGMailContactsFromGoogleAccount[df84e03f-9d22-44ed-a305-24c20407f87c] raised exception: Task of kind 'project.core.tasks.ImportGMailContactsFromGoogleAccount' is not registered, please make sure it's imported.
But the tasks show up as registered. If I run
python2.7 manage.py celeryd -l info
I obtain:
-------------- celery#web303.webfaction.com v3.0.13 (Chiastic Slide)
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: django://localhost//
- ** ---------- . app: default:0x1e55350 (djcelery.loaders.DjangoLoader)
- ** ---------- . concurrency: 8 (processes)
- ** ---------- . events: OFF (enable -E to monitor this worker)
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery: exchange:celery(direct) binding:celery
--- ***** -----
[Tasks]
. project.core.tasks.ImportGMailContactsFromGoogleAccount
. project.core.tasks.ParseEmails
I thought it could be a relative import error, but I assumed the changes in settings.py and wsgi.py would prevent that.
I am thinking the multiple Python versions supported by WebFaction could have something to do with this; however, I installed all the libraries for Python 2.7 and I am also running Django on 2.7, so there should be no problem with that.
Running on localhost with celeryd -l info, the tasks also show up in the list when I start the worker, and it doesn't output the error when I call the task - it runs perfectly.
Thank you
I had the same issue in a new Ubuntu 12.04 / Apache / mod_wsgi / Django 1.5 / Celery 3.0.13 production environment. Everything works fine on my Mac OS X 10.8 laptop and my old server (which has Celery 3.0.12), but not on the new server.
It seems there is some issue in Celery:
https://github.com/celery/celery/issues/1150
My initial solution was changing my Task-class-based task to a @task-decorator-based one, from something like this:
class CreateInstancesTask(Task):
    def run(self, pk):
        management.call_command('create_instances', verbosity=0, pk=pk)
tasks.register(CreateInstancesTask)
to something like this:
@task()
def create_instances_task(pk):
    management.call_command('create_instances', verbosity=0, pk=pk)
Now this task seems to work, but of course I have to do some further testing...
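If you prefer to keep the class-based style, another option worth trying (a suggestion of mine, not from the original answer) is to give the task an explicit name, so the name the worker registers matches the name the caller sends regardless of how the module is imported:
from celery.task import Task
from celery.registry import tasks
from django.core import management

class CreateInstancesTask(Task):
    # An explicit name avoids mismatches caused by relative vs. absolute imports.
    name = 'project.core.tasks.CreateInstancesTask'

    def run(self, pk):
        management.call_command('create_instances', verbosity=0, pk=pk)

tasks.register(CreateInstancesTask)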

Running periodic tasks with django and celery

I'm trying to create a simple background periodic task using the Django-Celery-RabbitMQ combination. I installed Django 1.3.1, and downloaded and set up djcelery. Here is what my settings.py file looks like:
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
....
import djcelery
djcelery.setup_loader()
...
INSTALLED_APPS = (
    'djcelery',
)
And I put a 'tasks.py' file in my application folder with the following contents:
from celery.task import PeriodicTask
from celery.registry import tasks
from datetime import timedelta
from datetime import datetime
class MyTask(PeriodicTask):
    run_every = timedelta(minutes=1)

    def run(self, **kwargs):
        self.get_logger().info("Time now: " + datetime.now())
        print("Time now: " + datetime.now())

tasks.register(MyTask)
And then I start up my django server (local development instance):
python manage.py runserver
Then I start up the celerybeat process:
python manage.py celerybeat --logfile=<path_to_log_file> -l DEBUG
I can see entries like this in the log:
[2012-04-29 07:50:54,671: DEBUG/MainProcess] tasks.MyTask sent. id->72a5963c-6e15-4fc5-a078-dd26da663323
And I can also see the corresponding entries getting created in the database, but I can't find where it is logging the text I specified in the run function of the MyTask class.
I tried fiddling with the logging settings and tried using the Django logger instead of the Celery logger, but to no avail. I'm not even sure my task is getting executed. If I print any debug information in the task, where does it go?
Also, this is the first time I'm working with any kind of message queuing system. It looks like the task will get executed as part of the celerybeat process - outside the Django web framework. Will I still be able to access all the Django models I created?
Thanks,
Venkat.
celerybeat only pushes tasks onto the queue when they are due; it does not execute them. Your task instances are stored on the RabbitMQ server. You need to run the celeryd daemon to actually execute your tasks:
python manage.py celeryd --logfile=<path_to_log_file> -l DEBUG
Also, if you are using RabbitMQ, I recommend installing the RabbitMQ management plugin:
rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management
service rabbitmq-server restart
It will be available at http://<your_server>:55672/ (login: guest, password: guest). There you can check how many tasks are queued in your RabbitMQ instance.
You should check the RabbitMQ logs, since celery sends the tasks to RabbitMQ and it should execute them. So all the prints of the tasks should be in RabbitMQ logs.