Do I need any special requirements for installing django-celery?

I am very new to Django and Celery. I was reading their docs, but I have trouble understanding them.
They have this code in their docs:
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
#celery
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
Now which host, username and password should I enter there? Should it be the VPS host details, database details, or something else?

That would be the settings for the message broker that celery will use. You will therefore need to install and run a message broker first, e.g. RabbitMQ.
In fact, you might want to read the docs for celery first. Once you know how to get celery running, the answer to your question will be obvious.
Good luck.
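As the answer says, these are the message broker's credentials, not your VPS or database details. If you install RabbitMQ locally, its out-of-the-box defaults happen to match the values in the question; a sketch showing how those same pieces combine into the single BROKER_URL form that later Celery versions use (the defaults shown are RabbitMQ's, and this assumes a broker on the same machine):

```python
# Assumes a local RabbitMQ with its out-of-the-box defaults:
# user "guest", password "guest", vhost "/", port 5672.
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"

# Later Celery versions accept the same information as one URL:
BROKER_URL = "amqp://{user}:{password}@{host}:{port}/{vhost}".format(
    user=BROKER_USER,
    password=BROKER_PASSWORD,
    host=BROKER_HOST,
    port=BROKER_PORT,
    vhost=BROKER_VHOST,
)
```

If your broker runs on another machine, replace the host, user, and password with that machine's details; nothing here comes from your database settings.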

Related

OperationalError, Error 111 connecting to 127.0.0.1:6379. Connection refused. After deploying in heroku

I am getting the below error after I deployed my website on heroku.
Error 111 connecting to 127.0.0.1:6379. Connection refused.
Request Method: POST
Request URL: https://website.herokuapp.com/account/register
Django Version: 3.2.8
Exception Type: OperationalError
Exception Value:
Error 111 connecting to 127.0.0.1:6379. Connection refused.
Exception Location: /app/.heroku/python/lib/python3.8/site-packages/kombu/connection.py, line 451, in _reraise_as_library_errors
Python Executable: /app/.heroku/python/bin/python
Python Version: 3.8.12
Python Path:
['/app',
'/app/.heroku/python/bin',
'/app',
'/app/.heroku/python/lib/python38.zip',
'/app/.heroku/python/lib/python3.8',
'/app/.heroku/python/lib/python3.8/lib-dynload',
'/app/.heroku/python/lib/python3.8/site-packages']
Server time: Sat, 11 Dec 2021 21:17:12 +0530
So basically my website has to send emails with OTPs after registration, as well as some contract-related emails. These emails are necessary to send and can't be avoided. I posted a question earlier about how to minimize the time sending emails takes, so that the user doesn't have to wait the entire time. I was advised to use asynchronous code for this, so I decided to use Celery, following a YouTube video that taught how to use it.
Now, after I pushed the code to the website, I am getting this error. I am a beginner and have no idea how to rectify it. Please suggest what I should do. Below are the details and configurations.
settings.py
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'
CELERY_ACCEPT_CONTENT =['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
requirements.txt
amqp==5.0.6
asgiref==3.4.1
billiard==3.6.4.0
celery==5.2.1
click==8.0.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
colorama==0.4.4
Deprecated==1.2.13
dj-database-url==0.5.0
Django==3.2.8
django-ckeditor==6.1.0
django-filter==21.1
django-js-asset==1.2.2
django-multiselectfield==0.1.12
dnspython==2.1.0
As I mentioned, I am a beginner; please give me a detailed answer on how I can rectify this error.
Here's the problem:
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
Redis won't be running on your local dyno. You'll have to run it somewhere else and configure your code to connect to it. A common choice is to run Redis via an addon:
Once you’ve chosen a broker, create your Heroku app and attach the add-on to it. In the examples we’ll use Heroku Redis as the Redis provider but there are plenty of other Redis providers in the Heroku Elements Marketplace.
If you choose to use Heroku Redis, you'll be able to get the connection string to your instance via the REDIS_URL environment variable:
Heroku add-ons provide your application with environment variables which can be passed to your Celery app. For example:
import os
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
Your Celery app now knows to use your chosen broker and result store for all of the tasks you define in it.
Other addons will provide similar configuration mechanisms.
All quoted documentation here, and most links, come from Heroku's Using Celery on Heroku article. I suggest you read the entire document for more information.
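If you prefer to keep this in Django's settings.py rather than on the Celery app object, a minimal sketch that reads the add-on's REDIS_URL when present and falls back to a local Redis otherwise (the fallback URL is an assumption for local development only):

```python
import os

# On Heroku, the Redis add-on exposes REDIS_URL as a config var;
# locally, where it is unset, fall back to a Redis on this machine.
CELERY_BROKER_URL = os.environ.get('REDIS_URL', 'redis://127.0.0.1:6379/0')
CELERY_RESULT_BACKEND = CELERY_BROKER_URL
```

This keeps one settings file working in both places: the hard-coded 127.0.0.1 from the question is only ever used when no REDIS_URL is provided.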

Django always tries to establish SSL connection with localhost

After setting DEBUG = False and SECURE_SSL_REDIRECT = True and deploying a version of my app to the server, I now want to continue developing locally. The problem is, I think at one point I forgot to remove SECURE_SSL_REDIRECT = True from settings.py, and I ran the local dev server with heroku local. My browser now always tries to connect to localhost over SSL, so it just hangs.
I tried removing the site-specific cookies for localhost in the browser settings (Chrome), but localhost still always tries to establish an SSL connection.
I am just trying to get back to a non-SSL local connection for development. Any ideas?
Django version 1.10.2
Heroku
Thanks
EDIT
It seems that if I clear ALL the cache and cookies and restart the browser, it will not ask for SSL again, so this appears to be a browser problem. Anyway, if anyone has an idea of how to accomplish this without clearing all the data in Chrome, that would be appreciated.
UPDATE
I have learned a better way to handle this situation. I have set up some code to automatically sense if the software is running on the local environment or the cloud production environment, like this:
if os.environ.get('LOCAL'):
DEBUG = True
SECURE_SSL_REDIRECT = False
else:
DEBUG = False
SECURE_SSL_REDIRECT = True
Of course, you have to take care of setting the LOCAL environment variable yourself in your local environment; on Heroku it is simply left unset, so the production branch applies.
I am using custom settings both for local (development) and production environments.
For instance:
myproject/settings/dev.py
DEBUG = True
SECURE_SSL_REDIRECT = False
...
myproject/settings/production.py
DEBUG = False
SECURE_SSL_REDIRECT = True
...
And then I specify the settings I want to use. On localhost like this:
python myproject/manage.py runserver --settings=settings.dev
and for production using the Heroku Procfile:
web: gunicorn myproject.wsgi.production --log-file -
Content of myproject/wsgi/production.py is:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings.production")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
If you use heroku local for your local development, you can create similar Procfile.local with local wsgi file.
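For instance, a local WSGI entry point mirroring the production one above could look like this (the file name myproject/wsgi/dev.py is an assumption, chosen to match the settings layout shown):

```python
# myproject/wsgi/dev.py -- hypothetical local counterpart to production.py
import os

# Point Django at the development settings before loading the application.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings.dev")

from django.core.wsgi import get_wsgi_application

application = get_wsgi_application()
```

and Procfile.local would then reference it the same way the Procfile references production:
web: gunicorn myproject.wsgi.dev --log-file -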

How to make redis BROKER_URL dynamic on deployment to AWS instance

I'm deploying a Django app which uses Celery tasks and has Redis as the broker backend. I'm using Docker for deployment, and my production server is an Amazon AWS instance. The problem I'm facing is that the Django settings are different for localhost:
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
and all my unit tests work. For docker it fails unless I change it to
BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
My question is: how do I identify the Redis broker URL on my deployment server? Will it be redis://redis:6379?
PS: On Heroku there's an add-on that exposes the Redis URL as REDISTOGO_URL. Is there something similar for an Amazon AWS server?
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
The above implies that both redis and celery are running on localhost, the same machine on which your django app is running.
Please check:
1) Redis is installed on the server, and is running. (sudo service redis-server start)
2) Celery is installed on the server, and is running.
BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
If you are using Docker, the above implies that there is another Docker container running Redis, and that your code container is linked to the Redis container under the alias 'redis'.
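One way to make the broker URL dynamic, so the same settings file works locally, under Docker, and on AWS, is to read the Redis host from an environment variable. REDIS_HOST below is a hypothetical variable name: you would set it to "redis" in your Docker configuration (matching the container alias) and to your AWS Redis endpoint in production, while leaving it unset locally:

```python
import os

# REDIS_HOST is a hypothetical environment variable: set it to "redis"
# in docker-compose (the linked container's alias), to your AWS endpoint
# in production, and leave it unset for local development and unit tests.
REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')

BROKER_URL = 'redis://{0}:6379/0'.format(REDIS_HOST)
CELERY_RESULT_BACKEND = BROKER_URL
```

This removes the need to edit settings per environment: the host is the only part that changes, and it now comes from the environment rather than from the code.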

django :how to configure celery redis

According to celery using redis docs
Configuration
Configuration is easy, just configure the location of your Redis database:
BROKER_URL = 'redis://localhost:6379/0'
Where the URL is in the format of:
redis://:password@hostname:port/db_number
all fields after the scheme are optional, and will default to localhost on port 6379, using database 0.
Where can I find this configuration file? Is it in the settings.py of my project?
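Yes, these are ordinary Django settings, so they normally live in your project's settings.py. A minimal sketch, assuming a local, unauthenticated Redis on the default port (so the optional password, hostname, port, and db_number fields from the URL format above are all left at their defaults or spelled out explicitly):

```python
# settings.py -- assuming a local Redis with no password, default port 6379
BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
```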

Django/Celery: celeryd won't retrieve new tasks from queue unless restarted

I cannot figure out why celeryd is not pulling new tasks that are added to the queue. It will only retrieve tasks once it is started, then fails to monitor thereafter. I am running the Django development server with django-celery using the Django ORM for the message broker.
My Django settings file has this configuration for celery:
INSTALLED_APPS += ("djcelery", )
INSTALLED_APPS += ("djkombu", )
import djcelery
djcelery.setup_loader()
BROKER_TRANSPORT = "djkombu.transport.DatabaseTransport"
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
Furthermore if it matters, I am using a remote MySQL server as my database backend.
Edit: Resolved!
After spending several hours on this, I found an answer in the Celery FAQ:
MySQL is throwing deadlock errors, what can I do?
I wasn't seeing the deadlock errors in the log, but modifying my /etc/my.cnf to include:
[mysqld]
transaction-isolation = READ-COMMITTED
resolved my issue.
Hope this helps someone else!
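If you can't edit /etc/my.cnf on the database server, the same isolation level can usually be set per connection from Django's database settings instead; a sketch assuming the MySQL backend (all names and credentials here are hypothetical placeholders):

```python
# Django settings.py -- hypothetical MySQL configuration
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',            # hypothetical database name
        'USER': 'myuser',          # hypothetical credentials
        'PASSWORD': 'secret',
        'HOST': 'db.example.com',  # the remote MySQL server
        'PORT': '3306',
        'OPTIONS': {
            # Run on every new connection, instead of changing the
            # server-wide default in /etc/my.cnf.
            'init_command': "SET SESSION TRANSACTION "
                            "ISOLATION LEVEL READ COMMITTED",
        },
    }
}
```

The effect is the same as the my.cnf change above, but scoped to connections Django opens rather than to the whole MySQL server.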