I am trying to set up RabbitMQ as a message broker for Celery on Windows Server 2012 R2. After I start the RabbitMQ server using the RabbitMQ start service on the applications menu, I try to start the Celery app with the following command:
celery -A proj worker -l info
After running the above command, I get the following error:
[2018-01-09 10:03:02,515: ERROR/MainProcess] consumer: Cannot connect to amqp://
guest:**@127.0.0.1:5672//: [WinError 10042] An unknown, invalid, or unsupported
option or level was specified in a getsockopt or setsockopt call.
Trying again in 2.00 seconds...
So I tried debugging by checking the status of the RabbitMQ server: I went into the RabbitMQ command prompt and typed rabbitmqctl status, and got the following response.
These are the services that I used to start RabbitMQ and the RabbitMQ command line
Here are my Django settings for Celery. I tried putting ports and usernames before and after the host, but I get the same error.
CELERY_BROKER_URL = 'amqp://localhost//'
CELERY_RESULT_BACKEND = 'amqp://localhost//'
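For example, one of the variants I tried spelled everything out explicitly (guest/guest and port 5672 are just RabbitMQ's defaults, so treat them as placeholders):
CELERY_BROKER_URL = 'amqp://guest:guest@localhost:5672//'
CELERY_RESULT_BACKEND = 'amqp://guest:guest@localhost:5672//'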
What is the issue here? How do I check whether the RabbitMQ service has started or not? What settings do I need to put in the Django settings file?
I was fighting the same issue. Ended up downgrading amqp to 2.1.3 based on the open issue in py-amqp:
https://github.com/celery/py-amqp/issues/130
Uninstall amqp using pip uninstall amqp
Install amqp using pip install -Iv amqp==2.1.3
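If the project pins its dependencies in a requirements file, the same downgrade can be recorded there so it survives future installs (a one-line sketch, assuming a standard requirements.txt):
amqp==2.1.3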
I have a kombu consumer which periodically sends a task to my celery application (using a RabbitMQ backend), as such:
celery_app.send_task('fetch_data', args=[], kwargs={})
The celery app receives the task and is able to execute it. However, after a while, when it receives another task in a similar fashion, I get a
[Errno 104] Connection reset by peer
In the rabbitmq logs I see a lot of heartbeat missed warnings.
I have already tried setting
broker_pool_limit = None
in my celery app, but it did not solve the problem.
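For reference, this is roughly where that option sits in my Celery app configuration (a minimal sketch; the app name and broker URL are placeholders):

from celery import Celery

# Placeholder app name and broker URL; only broker_pool_limit reflects the actual change.
celery_app = Celery('fetcher', broker='amqp://guest:guest@rabbitmq-host:5672//')
celery_app.conf.broker_pool_limit = None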
How do I rectify this?
I created a RabbitMQ cluster on two EC2 instances. My Django app uses Celery for async tasks, which in turn uses RabbitMQ as the message queue.
Whenever I start celery with the command:
python manage.py celery worker --loglevel=INFO
OR
python manage.py celeryd --loglevel=INFO
I keep getting the following error message related to the remote RabbitMQ:
[2015-05-19 08:58:47,307: ERROR/MainProcess] consumer: Cannot connect to amqp://myuser:**@<ip-address>:25672/myvhost/: Socket closed.
Trying again in 2.00 seconds...
I set permissions using:
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
and then restarted rabbitmq-server on both the cluster nodes. However, it didn't help.
In the log file, I see a few entries like the ones below:
=INFO REPORT==== 19-May-2015::08:14:41 ===
accepting AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672)
=ERROR REPORT==== 19-May-2015::08:14:44 ===
closing AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672):
{handshake_error,opening,0,
{amqp_error,access_refused,
"access to vhost 'myvhost' refused for user 'myuser'",
'connection.open'}}
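(A minimal way to reproduce the handshake outside of Celery, assuming kombu is installed and the placeholders below are replaced with the real user, password, broker IP, listener port and vhost:)

from kombu import Connection

# Placeholders: substitute the real credentials, broker address, listener port and vhost.
conn = Connection('amqp://myuser:mypassword@<ip-address>:5672/myvhost')
try:
    conn.connect()   # raises AccessRefused if the vhost or permissions are wrong
    print('connected OK')
finally:
    conn.release()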
The file /usr/local/etc/rabbitmq/rabbitmq-env.conf contains an entry for NODE_IP_ADDRESS to bind it only to localhost. Removing the NODE_IP_ADDRESS entry from the config binds the port to all network interfaces.
Source: https://superuser.com/questions/464311/open-port-5672-tcp-for-access-to-rabbitmq-on-mac
Turns out I had not created the appropriate configuration files. In my case (Ubuntu 14.04), I had to create the two configuration files below:
$ cat /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=<ip_of_ec2_instance>
<ip_of_ec2_instance> has to be the internal IP that EC2 uses, not the public IP that one uses to SSH into the instance. It can be obtained with the ip a command.
$ cat /etc/rabbitmq/rabbitmq.config
[
 {mnesia, [{dump_log_write_threshold, 1000}]},
 {rabbit, [{tcp_listeners, [25672]},
           {loopback_users, []}]}
].
I think the tcp_listeners entry ({tcp_listeners, [25672]}) was one of the most important pieces of configuration that I was missing.
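On the Django side, the broker URL then has to name that non-default port explicitly as well; a sketch with placeholder credentials:

BROKER_URL = 'amqp://myuser:mypassword@<ip_of_ec2_instance>:25672/myvhost'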
Thanks @dgil for the initial troubleshooting help.
The question has already been answered, but I am just leaving notes on a similar issue I faced, should anybody else find them useful.
I have a Flask app running on EC2 with AMQP as the broker on port 5672 and EC2 ElastiCache memcached as the result backend. The AMQP broker had trouble picking up tasks that were getting fired, so I resolved it as follows.
Assuming you have rabbitmq-server installed (sudo apt-get install rabbitmq-server), add the user and set its permissions as follows:
sudo rabbitmqctl add_user username password
sudo rabbitmqctl set_permissions username ".*" ".*" ".*"
Restart the server: sudo service rabbitmq-server restart
In your Flask app, for the Celery configuration:
broker_url=amqp://username:password@localhost:5672// (set as above)
backend=cache+memcached://(ec2 cache url):11211/
(The cache+memcached:// prefix tripped me up; without it I kept getting an import error (cannot import module).)
Open up port 5672 on your EC2 instance in the security group.
Now if you fire up your Celery worker, it should pick up the tasks that get fired and store the results on your memcached server.
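Putting that together, the Celery object in the Flask app ends up configured roughly like this (a sketch; the username, password and cache endpoint are placeholders):

from celery import Celery

# Placeholder credentials and cache endpoint; substitute your own values.
celery = Celery(
    'app',
    broker='amqp://username:password@localhost:5672//',
    backend='cache+memcached://<ec2-cache-url>:11211/',
)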
I use Celery with Django to consume/publish tasks to RabbitMQ from ~20 workers across a few datacenters. After about a month or so, I'm at 8000 open socket descriptors and the number keeps increasing until I restart RabbitMQ. Often I "kill -9" the Celery worker processes instead of shutting them down, since I do not want to wait for jobs to finish. On the workers I do not see the connections that RabbitMQ is showing. Is there a way to purge the old connections from RabbitMQ?
I'm using Celery 3.1.13 and RabbitMQ 3.2.4, all on Ubuntu 14.04. I'm not using librabbitmq, but pyamqp.
I was getting the same issue with the following 3-machine setup:
Worker (Ubuntu 14.04)
amqp==1.4.6
celery==3.1.13
kombu==3.0.21
Django App Server (Ubuntu 14.04)
amqp==1.4.2
celery==3.1.8
kombu==3.0.10
Rabbit MQ Server (Ubuntu 14.04 | rabbitmq-server 3.2.4)
Each task the worker received opened one connection that never closed (according to the RabbitMQ log) and consumed ~2-3 MB of memory.
I have since upgraded Celery to the latest version on my Django server and the socket descriptors and memory usage are holding steady.
I also see the connections close in the RabbitMQ log after the task completes, like so:
closing AMQP connection <0.12345.0> (192.168.1.100:54321 -> 192.168.1.100:5672):
connection_closed_abruptly
Use BROKER_HEARTBEAT in Django's settings.py file.
RabbitMQ expects this value from the client (Celery in this case).
Refer to
http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-broker_heartbeat for more details.
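A minimal sketch of what that looks like in settings.py (the 60-second value is only an example; match it to what your broker expects):

BROKER_HEARTBEAT = 60  # seconds between heartbeats; example value only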
I have a Django app that I've already deployed to Heroku. This app uses Celery for message queuing and I've run it locally using RabbitMQ without incident.
Unfortunately, when I went to deploy this baby to Heroku, I found that the RabbitMQ add-on wasn't available and that I'd have to use CloudAMQP. The documentation for both CloudAMQP and Heroku leads me to believe that I can use Celery (even though they recommend Pika), but when I try to deploy, I get gnarly connection errors for both my scheduler and worker processes. Here are the exact errors:
2012-07-09T16:46:22+00:00 app[scheduler.1]: [2012-07-09 11:46:22,234: ERROR/Beat] Celerybeat: Connection error: [Errno 111] Connection refused. Trying again in 2.0 seconds...
2012-07-09T16:46:23+00:00 app[worker.1]: [2012-07-09 11:46:23,852: ERROR/MainProcess] Consumer: Connection Error: [Errno 111] Connection refused. Trying again in 2 seconds...
I should note that my Heroku config vars DO have a CLOUDAMQP_URL, so that shouldn't be a problem?
I would appreciate it if anyone who has used CloudAMQP with Django/Heroku could give me some guidance about how to make sure that Celery can connect with the broker.
You're probably exceeding the 3 concurrent connections limit of the free plan. Set the BROKER_POOL_LIMIT to 1 and it should work a lot better.
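A sketch of the relevant settings.py lines, assuming the broker URL is read from the CLOUDAMQP_URL config var the add-on sets:

import os

BROKER_URL = os.environ.get('CLOUDAMQP_URL', 'amqp://guest:guest@localhost:5672//')
BROKER_POOL_LIMIT = 1  # stay under the free plan's concurrent connection limit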
Make sure that you have this at the top of your settings.py file.
import djcelery
djcelery.setup_loader()
I am trying to get Django-Celery running on my Django app. I cannot get the worker server to run. When I try, I get the message: No connection could be made because the target machine actively refused it.
Here is what I have done so far. First, I installed the django celery package: http://pypi.python.org/pypi/django-celery
I can load it into Python without problems. I also installed the RabbitMQ server per the Windows install instructions: http://www.rabbitmq.com/install.html#windows
Starting the tutorials in Python on the RabbitMQ site, I saw the need to install pika: http://pypi.python.org/pypi/pika. It imports without any problems.
From there I start the RabbitMQ server by running this at the command line: rabbitmq-service start
I get the message back that Service RabbitMQ started
Here is where I start to have problems.
I attempted the first steps in django-celery: http://packages.python.org/django-celery/getting-started/first-steps-with-django.html and the "hello world" example on the rabbitMQ site: http://www.rabbitmq.com/tutorials/tutorial-one-python.html
In both cases I get the message: No Connection could be made because the target machine actively refused it
My first thought was that this sounded like a firewall problem. So I went into the Windows 7 firewall and added inbound and outbound rules to open the local and remote ports 5672 and 5673 for the TCP protocol, but I still get the same error message.
When I run rabbitmqctl status I get the message:
Error: unable to connect to node 'rabbit@hostname': nodedown
diagnostics:
- nodes and their ports on hostname: [{rabbitmqctl18856, 505031}]
Does that mean that it is trying to operate on those ports? What about the default 5672?
Any suggestions?
UPDATE: This was actually a problem resulting from several failed RabbitMQ installs conflicting with the latest installation. If you have to remove RabbitMQ, use the 'rabbitmq-service remove' command and not SC DELETE, which caused a lot of problems for me; I had to go in and clean up my Windows registry.
The nodedown error indicated by rabbitmqctl suggests that the server isn't running on that machine.
Try going through the steps in RabbitMQ's troubleshooting guide. In particular, pay close attention to the logs. Has the server crashed for some reason? Could you post the logs somewhere?
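As a quick way to confirm whether anything is listening on the default port at all, the opening lines of the RabbitMQ Python tutorial double as a connectivity probe (a sketch using pika, which is already installed here):

import pika

# Fails with pika.exceptions.AMQPConnectionError if nothing is listening on localhost:5672
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
print('connected to RabbitMQ on localhost:5672')
connection.close()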