I am using django-celery with RabbitMQ as my broker (the guest rabbit user has full access on the local machine). I have a bunch of projects, each in its own virtualenv, but recently needed celery on two of them. I have one instance of rabbitmq running.
(project1_env)python manage.py celery worker
normal celery stuff....
[Configuration]
broker: amqp://guest@localhost:5672//
app: default:0x101bd2250 (djcelery.loaders.DjangoLoader)
[Queues]
push_queue: exchange:push_queue(direct) binding:push_queue
In my other project
(project2_env)python manage.py celery worker
normal celery stuff....
[Configuration]
broker: amqp://guest@localhost:5672//
app: default:0x101dbf450 (djcelery.loaders.DjangoLoader)
[Queues]
job_queue: exchange:job_queue(direct) binding:job_queue
When I run a task in project1's code, it fires to the project1 celery worker just fine on the push_queue. The problem is that when I am working in project2, any task I fire ends up in the project1 celery worker, even when celery isn't running for project1.
If I fire back up project1_env and start celery I get
Received unregistered task of type 'update-jobs'.
If I run list_queues in rabbit, it shows all the queues
...
push_queue 0
job_queue 0
...
In my env settings, CELERYD_CHDIR and CELERY_CONFIG_MODULE are both blank.
Some things I have tried:
purging celery
force_reset on rabbitmq
rabbitmq virtual hosts as outlined in this answer: Multi Celery projects with same RabbitMQ broker backend process
moving the django celery settings out and setting CELERY_CONFIG_MODULE to the proper settings module
setting the CELERYD_CHDIR in both projects to the proper directory
None of these things has stopped project2 tasks from trying to run in the project1 celery worker.
I am on Mac, if that makes a difference or helps.
UPDATE
Setting up different virtual hosts made it all work. I just had it configured wrong.
If you're going to be using the same RabbitMQ instance for both Celery instances, you'll want to use virtual hosts. This is what we use and it works. You mention that you've tried it, but your broker URLs are both amqp://guest@localhost:5672//, with no virtual host specified. If both Celery instances are connected to the same host and virtual host, they will produce to and consume from the same set of queues.
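For illustration, a minimal sketch of what the two settings files could look like, assuming vhosts named project1 and project2 already exist and the guest user has been granted permissions on them (the vhost names are placeholders):

# project1 settings.py (hypothetical vhost "project1")
BROKER_URL = 'amqp://guest:guest@localhost:5672/project1'

# project2 settings.py (hypothetical vhost "project2")
BROKER_URL = 'amqp://guest:guest@localhost:5672/project2'

With separate vhosts, each project gets its own set of exchanges and queues, so the two workers no longer see each other's tasks.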
I have a Django application deployed on a K8s cluster. I need to send some emails (some are scheduled, others should be sent asynchronously), and the idea was to delegate those emails to Celery.
So I set up a Redis server (with Sentinel) on the cluster, and deployed an instance for a Celery worker and another for Celery beat.
The k8s object used to deploy the Celery worker is pretty similar to the one used for the Django application. The main difference is the command introduced on the celery worker: ['celery', '-A', 'saleor', 'worker', '-l', 'INFO']
Scheduled emails are sent with no problem (celery worker and celery beat don't have any problems connecting to the Redis server). However, the asynchronous emails - "delegated" by the Django application - are not sent because it is not possible to connect to the Redis server (ERROR celery.backends.redis Connection to Redis lost: Retry (1/20) in 1.00 second. [PID:7:uWSGIWorker1Core0])
Error 1:
socket.gaierror: [Errno -5] No address associated with hostname
Error 2:
redis.exceptions.ConnectionError: Error -5 connecting to redis:6379. No address associated with hostname.
The Redis server, Celery worker, and Celery beat are in a "redis" namespace, while the other things, including the Django app, are in the "development" namespace.
Here are the variables that I define:
- name: CELERY_PASSWORD
  valueFrom:
    secretKeyRef:
      name: redis-password
      key: redis_password
- name: CELERY_BROKER_URL
  value: redis://:$(CELERY_PASSWORD)@redis:6379/1
- name: CELERY_RESULT_BACKEND
  value: redis://:$(CELERY_PASSWORD)@redis:6379/1
I also tried to define CELERY_BACKEND_URL (with the same value as CELERY_RESULT_BACKEND), but it made no difference.
What could be the cause for not connecting to the Redis server? Am I missing any variables? Could it be because pods are in a different namespace?
Thanks!
Solution from @sofia that helped to fix this issue:
You need to use the same namespace for the Redis server and for the Django application. In this particular case, change the namespace "redis" to "development" where the application is deployed.
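Alternatively, if you want to keep Redis in its own namespace, cross-namespace access works if you address the service by its fully-qualified DNS name (<service>.<namespace>.svc.cluster.local). A rough sketch of the Django settings under that assumption (the Service named "redis" in the "redis" namespace is taken from the question; adjust to your cluster):

import os

# Assumes a Service named "redis" in the "redis" namespace and the password in an env var.
redis_password = os.environ.get("CELERY_PASSWORD", "")
CELERY_BROKER_URL = f"redis://:{redis_password}@redis.redis.svc.cluster.local:6379/1"
CELERY_RESULT_BACKEND = CELERY_BROKER_URL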
I am trying to set up RabbitMQ as a message broker for Celery on Windows Server 2012 R2. After I start the RabbitMQ server using the RabbitMQ start service in the applications menu, I try to start the celery app with the following command.
celery -A proj worker -l info
I get the following error after the above command.
[2018-01-09 10:03:02,515: ERROR/MainProcess] consumer: Cannot connect to amqp://
guest:**@127.0.0.1:5672//: [WinError 10042] An unknown, invalid, or unsupported
option or level was specified in a getsockopt or setsockopt call.
Trying again in 2.00 seconds...
So I tried debugging by checking the status of the RabbitMQ server: I went into the RabbitMQ command prompt and typed rabbitmqctl status, and got the following response.
These are the services that I used to start RabbitMQ and the RabbitMQ command line
Here are my Django settings for Celery. I tried putting ports and usernames before and after the hosts, but I get the same error.
CELERY_BROKER_URL = 'amqp://localhost//'
CELERY_RESULT_BACKEND = 'amqp://localhost//'
What is the issue here? How do I check whether the RabbitMQ service started or not? What settings do I need to put in the Django settings file?
I was fighting the same issue. Ended up downgrading amqp to 2.1.3 based on the open issue in py-amqp:
https://github.com/celery/py-amqp/issues/130
Uninstall amqp using pip uninstall amqp
Install amqp using pip install -Iv amqp==2.1.3
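To confirm which version a given virtualenv actually ended up with, a quick check from Python (assuming py-amqp is importable):

import amqp
# Should report 2.1.3 after the downgrade
print(amqp.__version__)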
I have a test Django site using a mod_wsgi daemon process. I have set up a simple Celery task to email a contact form, and have installed supervisor.
I know that the Django code is correct.
The problem I'm having is that when I submit the form, I am only getting one message - the first one. Subsequent completions of the contact form do not send any message at all.
On my server, I have another test site with a configured supervisor task running which uses the Django server (ie it's not using mod_wsgi). Both of my tasks are running fine if I do
sudo supervisorctl status
Here is my conf file for the task I've described above, which is saved in
/etc/supervisor/conf.d
The user in this instance is called myuser.
[program:test_project]
command=/home/myuser/virtualenvs/test_project_env/bin/celery -A test_project worker --loglevel=INFO --concurrency=10 -n worker2@%%h
directory=/home/myuser/djangoprojects/test_project
user=myuser
numprocs=1
stdout_logfile=/var/log/celery/test_project.out.log
stderr_logfile=/var/log/celery/test_project.err.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
My other test site has this set as the command - note worker1@%%h
command=/home/myuser/virtualenvs/another_test_project_env/bin/celery -A another_test_project worker --loglevel=INFO --concurrency=10 -n worker1@%%h
I'm obviously doing something wrong, in that only my first form submission is being processed. If I look at the out.log file referred to above, I only see the first task; nothing is visible for the other form submissions.
Many thanks in advance.
UPDATE
I submitted the first form at 8.32 am (GMT), which was received, and then, as described above, another one shortly thereafter for which no task was created. Just after finishing the question, I submitted the form again at 9.15, and for this a task was created and the message received! I then submitted the form once more, and again no task was created. Hope this helps!
Use ps auxf | grep celery to see how many workers you have started. If there is another worker that you started earlier and never killed, that older worker will consume the tasks, which is why only one task is received for every two or three (or more) submissions.
You need to stop celery with:
sudo supervisorctl -c /etc/supervisord/supervisord.conf stop all
every time, and set this in supervisord.conf:
stopasgroup=true ; send stop signal to the UNIX process group (default false)
Otherwise it will cause memory leaks and regular task loss.
If you have multiple Django sites, here is a setup using RabbitMQ vhosts:
You need to add a RabbitMQ vhost and give your user permissions on it:
sudo rabbitmqctl add_vhost {vhost_name}
sudo rabbitmqctl set_permissions -p {vhost_name} {username} ".*" ".*" ".*"
Each site then uses a different vhost (but they can use the same user).
Add this to your Django settings.py:
BROKER_URL = 'amqp://username:password@localhost:5672/vhost_name'
Some more info here:
Using celeryd as a daemon with multiple django apps?
Running multiple Django Celery websites on same server
Run Multiple Django Apps With Celery On One Server With Rabbitmq VHosts
We have two servers, Server A and Server B. Server A is dedicated to running the Django web app. Due to the large volume of data, we decided to run the celery tasks on Server B. Servers A and B use a common database. Tasks are initiated after post_save on models in the Server A web app. How can I implement this using RabbitMQ in my Django project?
You have 2 servers, 1 project and 2 settings files (1 per server).
server A (web server + rabbit)
server B (only celery for workers)
Then you set up the broker url in both settings. Something like this:
BROKER_URL = 'amqp://user:password@IP_SERVER_A:5672//', replacing IP_SERVER_A with the IP of server A in server B's settings.
Now any task will be sent to rabbit on server A, to the virtual host /.
On server B, you just need to start a celery worker, something like this:
python manage.py celery worker -Q queue_name -l info
And that's it.
Explanation: Django sends messages to rabbit to queue a task, then the celery workers pick up those messages and execute the tasks.
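As a rough illustration of the producer side on server A, assuming a hypothetical task called process_data and the same queue name you pass to the worker with -Q (the receiver here connects to all models for simplicity; in practice you would pass sender=YourModel):

# signals.py on server A (hypothetical example)
from celery import shared_task
from django.db.models.signals import post_save
from django.dispatch import receiver

@shared_task
def process_data(instance_id):
    # the actual work runs on server B's workers
    ...

@receiver(post_save)
def queue_processing(sender, instance, **kwargs):
    # route the task to the queue that server B's worker consumes (-Q queue_name)
    process_data.apply_async(args=[instance.pk], queue='queue_name')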
Note: RabbitMQ does not have to be installed on server A; you can install rabbit on a server C and reference it in the BROKER_URL in both settings (A and B) like this: BROKER_URL = 'amqp://user:password@IP_SERVER_C:5672//'.
Sorry for my English.
greetings.
I created a rabbitmq cluster on two instances on EC2. My django app uses celery for async tasks, which in turn uses RabbitMQ as the message queue.
Whenever I start celery with the command:
python manage.py celery worker --loglevel=INFO
OR
python manage.py celeryd --loglevel=INFO
I keep getting following error message related to remote RabbitMQ:
[2015-05-19 08:58:47,307: ERROR/MainProcess] consumer: Cannot connect to amqp://myuser:**@<ip-address>:25672/myvhost/: Socket closed.
Trying again in 2.00 seconds...
I set permissions using:
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
and then restarted rabbitmq-server on both the cluster nodes. However, it didn't help.
In the log file, I see a few entries like the ones below:
=INFO REPORT==== 19-May-2015::08:14:41 ===
accepting AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672)
=ERROR REPORT==== 19-May-2015::08:14:44 ===
closing AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672):
{handshake_error,opening,0,
{amqp_error,access_refused,
"access to vhost 'myvhost' refused for user 'myuser'",
'connection.open'}}
The file /usr/local/etc/rabbitmq/rabbitmq-env.conf contains an entry for NODE_IP_ADDRESS that binds it only to localhost. Removing the NODE_IP_ADDRESS entry from the config binds the port to all network interfaces.
Source: https://superuser.com/questions/464311/open-port-5672-tcp-for-access-to-rabbitmq-on-mac
Turns out I had not created the appropriate configuration files. In my case (Ubuntu 14.04), I had to create the two configuration files below:
$ cat /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=<ip_of_ec2_instance>
<ip_of_ec2_instance> has to be the internal IP that EC2 uses, not the public IP that one uses to ssh into the instance. It can be obtained with the ip a command.
$ cat /etc/rabbitmq/rabbitmq.config
[
{mnesia, [{dump_log_write_threshold, 1000}]},
{rabbit, [{tcp_listeners, [25672]}]},
{rabbit, [{loopback_users, []}]}
].
I think the line {rabbit, [{tcp_listeners, [25672]}]}, was one of the most important pieces of configuration that I was missing.
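For reference, the Celery side then has to point at that same non-default port; a sketch of the matching Django setting (the credentials and internal IP are placeholders matching the broker URL from the question):

BROKER_URL = 'amqp://myuser:mypassword@<internal-ec2-ip>:25672/myvhost'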
Thanks @dgil for the initial troubleshooting help.
The question has been answered, but I'm just leaving notes on a similar issue I faced, in case anybody else finds them useful.
I have a Flask app running on EC2 with amqp as the broker on port 5672 and EC2 ElastiCache memcached as the backend. The amqp broker had trouble picking up tasks that were getting fired, so I resolved it as follows.
Assuming you have rabbitmq-server installed (sudo apt-get install rabbitmq-server), add the user and set the permissions like so:
sudo rabbitmqctl add_user username password
sudo rabbitmqctl set_permissions username ".*" ".*" ".*"
restart server: sudo service rabbitmq-server restart
In your Flask app, the celery configuration is:
broker_url=amqp://username:password@localhost:5672// (set as above)
backend=cache+memcached://(ec2 cache url):11211/
(The cache+memcached:// prefix tripped me up; without it I kept getting an import error (cannot import module).)
Open up port 5672 on your EC2 instance in the security group.
Now if you fire up your celery worker, it should pick up the tasks that get fired and store the results on your memcached server.
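Putting those pieces together, a minimal sketch of the Celery app configuration in the Flask project (the app name, username, password, and cache endpoint are placeholders; the memcached backend also needs a client library such as pylibmc installed, which is what caused the import error mentioned above):

from celery import Celery

celery_app = Celery(
    'myflaskapp',  # hypothetical app name
    broker='amqp://username:password@localhost:5672//',
    backend='cache+memcached://<ec2-cache-url>:11211/',
)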