I cannot figure out why celeryd is not picking up new tasks added to the queue. It only retrieves the tasks already in the queue when it starts, and fails to monitor for new ones thereafter. I am running the Django development server with django-celery, using the Django ORM as the message broker.
My Django settings file has this configuration for celery:
INSTALLED_APPS += ("djcelery", )
INSTALLED_APPS += ("djkombu", )
import djcelery
djcelery.setup_loader()
BROKER_TRANSPORT = "djkombu.transport.DatabaseTransport"
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
In case it matters, I am using a remote MySQL server as my database backend.
Edit: Resolved!
After spending several hours on this, I found an answer in the Celery FAQ:
MySQL is throwing deadlock errors, what can I do?
I wasn't seeing the deadlock errors in the log, but modifying my /etc/my.cnf to include:
[mysqld]
transaction-isolation = READ-COMMITTED
resolved my issue.
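If editing /etc/my.cnf isn't an option (for instance on a managed remote MySQL server), the same isolation level can apparently also be requested from Django's side. A rough sketch, assuming the stock MySQL backend (the database name and host below are placeholders):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',             # placeholder: your database name
        'HOST': 'db.example.com',   # placeholder: your remote MySQL server
        'OPTIONS': {
            # ask MySQL for READ COMMITTED on every connection Django opens
            'init_command': 'SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED',
        },
    }
}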
Hope this helps someone else!
Related
I am getting the below error after I deployed my website on Heroku.
Error 111 connecting to 127.0.0.1:6379. Connection refused.
Request Method: POST
Request URL: https://website.herokuapp.com/account/register
Django Version: 3.2.8
Exception Type: OperationalError
Exception Value:
Error 111 connecting to 127.0.0.1:6379. Connection refused.
Exception Location: /app/.heroku/python/lib/python3.8/site-packages/kombu/connection.py, line 451, in _reraise_as_library_errors
Python Executable: /app/.heroku/python/bin/python
Python Version: 3.8.12
Python Path:
['/app',
'/app/.heroku/python/bin',
'/app',
'/app/.heroku/python/lib/python38.zip',
'/app/.heroku/python/lib/python3.8',
'/app/.heroku/python/lib/python3.8/lib-dynload',
'/app/.heroku/python/lib/python3.8/site-packages']
Server time: Sat, 11 Dec 2021 21:17:12 +0530
So basically my website has to send OTP emails after registration, and also some contract-related emails. These emails are necessary to send, so they can't be avoided. I posted a question earlier about how to minimize the time that sending emails takes, so that the user doesn't have to wait the entire time, and was advised to use asynchronous code for this. So I decided to use Celery, and followed a YouTube video that taught how to use it.
Now, after I pushed the code to the website, I am getting this error. I am a beginner and have no idea how to rectify it. Please suggest what I should do. Below are the details and configurations.
settings.py
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'
CELERY_ACCEPT_CONTENT =['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
requirements.txt
amqp==5.0.6
asgiref==3.4.1
billiard==3.6.4.0
celery==5.2.1
click==8.0.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
colorama==0.4.4
Deprecated==1.2.13
dj-database-url==0.5.0
Django==3.2.8
django-ckeditor==6.1.0
django-filter==21.1
django-js-asset==1.2.2
django-multiselectfield==0.1.12
dnspython==2.1.0
As I mentioned, I am a beginner, so please give me a detailed answer on how I can rectify this error.
Here's the problem:
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
Redis won't be running on your dyno, which is why connecting to 127.0.0.1 is refused. You'll have to run it somewhere else and configure your code to connect to it. A common choice is to run Redis via an addon:
Once you’ve chosen a broker, create your Heroku app and attach the add-on to it. In the examples we’ll use Heroku Redis as the Redis provider but there are plenty of other Redis providers in the Heroku Elements Marketplace.
If you choose to use Heroku Redis, you'll be able to get the connection string to your instance via the REDIS_URL environment variable:
Heroku add-ons provide your application with environment variables which can be passed to your Celery app. For example:
import os
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
Your Celery app now knows to use your chosen broker and result store for all of the tasks you define in it.
Other addons will provide similar configuration mechanisms.
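As a concrete (unofficial) sketch, a Django project's celery.py could read the broker from the environment and fall back to a local Redis for development; the project name myproject is an assumption:
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')

# Use the add-on's connection string when present, a local Redis otherwise.
redis_url = os.environ.get('REDIS_URL', 'redis://127.0.0.1:6379')
app.conf.update(broker_url=redis_url, result_backend=redis_url)

app.autodiscover_tasks()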
All quoted documentation here, and most links, come from Heroku's Using Celery on Heroku article. I suggest you read the entire document for more information.
I'm building a server with Flask/Gunicorn and Nginx. My script (Flask server) does two things with 'threading':
connect to MQTT broker
run a flask server
But when I try using gunicorn: gunicorn --bind 0.0.0.0:5000 wsgi:app, the first thread doesn't run.
Here is the code (not complete):
import threading

def run_mqtt():
    while True:
        mqtt_client.connect(mqtt_server, port=mqtt_port)

def run_server():
    app.run(host='0.0.0.0', port=5000, debug=False)

if __name__ == '__main__':
    t1 = threading.Thread(target=run_mqtt)
    t2 = threading.Thread(target=run_server)
    t1.daemon = True
    t2.daemon = True
    t1.start()
    t2.start()
Please help me, I have to find the solution very fast! Thanks!!
Gunicorn is based on the pre-fork worker model. That means that when it starts, it has a master process and spawns off worker processes as necessary. Most likely the first thread did run, but you lost track of it in the other prefork processes.
If you want to have a background thread that flask controllers can interact with and share memory with, it's unlikely that gunicorn is a good solution for you.
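That said, if all you need is a background MQTT loop inside each worker (accepting that every worker gets its own connection), one workaround is to start the thread at import time rather than under if __name__ == '__main__':, since gunicorn imports wsgi:app but never runs that block. A rough sketch, assuming paho-mqtt and the same mqtt_client setup as in the question:
import threading

def run_mqtt():
    # mqtt_client, mqtt_server and mqtt_port are assumed from the question
    mqtt_client.connect(mqtt_server, port=mqtt_port)
    mqtt_client.loop_forever()  # handle MQTT traffic in this thread

# Module level, so it also runs when gunicorn imports the app.
mqtt_thread = threading.Thread(target=run_mqtt, daemon=True)
mqtt_thread.start()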
My Django site running on Heroku is using CloudAMQP to handle its scheduled Celery tasks. CloudAMQP is registering many more messages than I have tasks, and I don't understand why.
e.g., in the past couple of hours I'll have run around 150 scheduled tasks (two that run once a minute, another that runs once every five minutes), but the CloudAMQP console's Messages count increased by around 1,300.
My relevant Django settings:
BROKER_URL = os.environ.get("CLOUDAMQP_URL", "")
BROKER_POOL_LIMIT = 1
BROKER_HEARTBEAT = None
BROKER_CONNECTION_TIMEOUT = 30
CELERY_ACCEPT_CONTENT = ['json',]
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_RESULT_EXPIRES = 7 * 86400
CELERY_SEND_EVENTS = False
CELERY_EVENT_QUEUE_EXPIRES = 60
CELERY_RESULT_BACKEND = None
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
My Procfile:
web: gunicorn myproject.wsgi --log-file -
main_worker: python manage.py celery worker --beat --without-gossip --without-mingle --without-heartbeat --loglevel=info
Looking at the Heroku logs I only see the number of scheduled tasks running that I'd expect.
The RabbitMQ overview graphs look much the same most of the time. I don't understand RabbitMQ well enough to know whether the other panels shed light on the problem, but I don't think they show anything that would obviously account for all these messages.
I'd like to at least understand what the extra messages are, and then whether there's a way I can eliminate some or all of them.
I got the same error a few days ago.
For those who hit the same issue, CloudAMQP's docs recommend adding some arguments when you launch your Celery worker:
--without-gossip --without-mingle --without-heartbeat
"This will decrease the message rates substantially. Without these
flags Celery will send hundreds of messages per second with different
diagnostic and redundant heartbeat messages."
And indeed, this fixed the issue for me! You finally get only your own messages sent.
I am going nuts here. I am trying to get Django logging to work, which led me to check the gunicorn logs, which I found were not set up. When I try to set them up via the config file, it throws a 502, and I don't know how to track it down...
According to this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-the-django-one-click-install-image
the config file on the DigitalOcean deployment is at /etc/gunicorn.d/gunicorn.py.
Now when I go to that file, add a line like this as the gunicorn docs tell me to do, and then restart gunicorn, it throws a 502 at me:
"""gunicorn WSGI server configuration."""
from multiprocessing import cpu_count
from os import environ
def max_workers():
return cpu_count() * 2 + 1
max_requests = 1000
worker_class = 'gevent'
workers = max_workers()
#my added config
errorlog = '/var/log/gunicorn/error.log'
It's interesting to note that when I restart gunicorn with
sudo service gunicorn restart
it goes smoothly the first time, but if I try to restart it again it gives me this:
stop: Unknown instance:
gunicorn start/running, process 30943
I have tried contacting DigitalOcean about this and so far they have not been helpful. Any ideas of where to go next?
This was caused by the permissions of the log files. Through some more testing I found that the log files need to already be in place, and their permissions need to be set to no less than 666.
I have no idea why they need to be this high...
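For the record, here is a rough one-off setup sketch of what worked for me (run as root before restarting gunicorn; the path matches the errorlog setting above):
import os

log_path = '/var/log/gunicorn/error.log'
os.makedirs(os.path.dirname(log_path), exist_ok=True)  # ensure /var/log/gunicorn exists
open(log_path, 'a').close()                            # pre-create the log file
os.chmod(log_path, 0o666)                              # no less than 666, as noted above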
I am very new to Django and Celery. I was reading their docs, but I have trouble understanding them.
They have this code in their docs:
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
#celery
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
Now which host, username and password should I enter there? Should those be the VPS host details, the database details, or something else?
Those are the settings for the message broker that Celery will use. You will therefore need to install and run a message broker first, e.g. RabbitMQ.
In fact, you might want to read the Celery docs first. Once you know how to get Celery running, the answer to your question will be obvious.
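For example, if you go with RabbitMQ installed and running locally under its default account, the broker settings look like this; note that the BROKER_BACKEND line from the docs selects the Django-database transport instead and would be dropped (this sketch assumes a stock local RabbitMQ):
BROKER_HOST = "localhost"    # the machine rabbitmq-server runs on
BROKER_PORT = 5672           # RabbitMQ's default AMQP port
BROKER_USER = "guest"        # RabbitMQ ships with a guest/guest account
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"           # RabbitMQ's default virtual host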
Good luck.