I'm deploying a Django app which uses Celery tasks and has Redis as the broker backend. I'm using Docker for deployment and my production server is an Amazon AWS instance. The problem I'm facing is that the Django settings are different for localhost:
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
and all my unit tests work. For Docker it fails unless I change it to:
BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
My question is: how do I identify the Redis broker URL on my deployment server? Will it be redis://redis:6379?
PS: On Heroku there's an add-on that exposes the Redis URL, called REDISTOGO_URL. Is there something similar for an Amazon AWS server?
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
The above implies that both Redis and Celery are running on localhost, the same machine on which your Django app is running.
Please check:
1) Redis is installed on the server, and is running. (sudo service redis-server start)
2) Celery is installed on the server, and is running.
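A quick way to confirm both (assuming redis-cli is available on the server and substituting your own project name in the celery command):
redis-cli -h localhost -p 6379 ping   # should reply PONG
celery -A yourproject status          # should list the running worker nodes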
BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
If you are using Docker, the above implies that there is another Docker container running Redis, and that your code container is linked to the Redis container with the alias 'redis'.
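One common way to keep a single settings.py working across these environments is to read the host from an environment variable. A minimal sketch, where REDIS_HOST is a hypothetical variable name you would set per environment ('localhost' on your machine, 'redis' inside Docker, or wherever Redis runs on your AWS instance):
import os

# Hypothetical environment variable; defaults to localhost for local development.
REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')

BROKER_URL = 'redis://{}:6379'.format(REDIS_HOST)
CELERY_RESULT_BACKEND = BROKER_URL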
Related
Locally, it works. Socketio upgrades to websocket instead of resorting to polling.
This is obvious from the logs:
...
FYnWEW0ufWGO7ExdAAAA: Received request to upgrade to websocket
FYnWEW0ufWGO7ExdAAAA: Upgrade to websocket successful
...
Upon deploying the application, it partially works when I create a Procfile with the content:
web: gunicorn app:app
The issue here is that socketio fails to upgrade to websocket and therefore resorts to polling.
Here is a gif showing that in production it doesn't upgrade to websockets and instead resorts to constant polling.
My file structure is
wsgi.py
app.py
Procfile
requirements.txt
This is how I initialize socketio:
app = ...
socketio = SocketIO(app,
                    logger=True,
                    engineio_logger=True,
                    cors_allowed_origins="*"
                    )
if __name__ == "__main__":
    socketio.run(app, debug=False, port=5000)
Notice I'm not setting async_mode, which was the issue in this SO question.
How do I deploy my flask app with socketio to Heroku and have it upgrade to websockets?
I think the issue is that I'm just not using the right Procfile command to start the application in deployment.
Having a Procfile with the content
web: gunicorn --worker-class eventlet -w 1 wsgi:app
did the job.
Also, it is important that your dyno is set to "ON"
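For completeness, here is a minimal wsgi.py matching that Procfile. This is only a sketch based on the file structure shown above (it assumes app.py exposes the app and socketio objects created there); gunicorn only needs the Flask app, which Flask-SocketIO has already wrapped with its Engine.IO WSGI middleware:
# wsgi.py - gunicorn's eventlet worker serves this WSGI app
from app import app
Note that eventlet also has to be listed in requirements.txt for the --worker-class eventlet flag to work.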
I have a Django application deployed on a K8s cluster. I need to send some emails (some are scheduled, others should be sent asynchronously), and the idea was to delegate those emails to Celery.
So I set up a Redis server (with Sentinel) on the cluster, and deployed an instance for a Celery worker and another for Celery beat.
The k8s object used to deploy the Celery worker is pretty similar to the one used for the Django application. The main difference is the command introduced on the celery worker: ['celery', '-A', 'saleor', 'worker', '-l', 'INFO']
Scheduled emails are sent with no problem (celery worker and celery beat don't have any problems connecting to the Redis server). However, the asynchronous emails - "delegated" by the Django application - are not sent because it is not possible to connect to the Redis server (ERROR celery.backends.redis Connection to Redis lost: Retry (1/20) in 1.00 second. [PID:7:uWSGIWorker1Core0])
Error 1:
socket.gaierror: [Errno -5] No address associated with hostname
Error 2:
redis.exceptions.ConnectionError: Error -5 connecting to redis:6379. No address associated with hostname.
The Redis server, Celery worker, and Celery beat are in a "redis" namespace, while the other things, including the Django app, are in the "development" namespace.
Here are the variables that I define:
- name: CELERY_PASSWORD
  valueFrom:
    secretKeyRef:
      name: redis-password
      key: redis_password
- name: CELERY_BROKER_URL
  value: redis://:$(CELERY_PASSWORD)@redis:6379/1
- name: CELERY_RESULT_BACKEND
  value: redis://:$(CELERY_PASSWORD)@redis:6379/1
I also tried to define CELERY_BACKEND_URL (with the same value as CELERY_RESULT_BACKEND), but it made no difference.
What could be the cause for not connecting to the Redis server? Am I missing any variables? Could it be because pods are in a different namespace?
Thanks!
Solution from @sofia that helped to fix this issue:
You need to use the same namespace for the Redis server and for the Django application. In this particular case, change the namespace "redis" to "development" where the application is deployed.
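For reference, if the Redis service had stayed in the "redis" namespace, the broker hostname would need the namespace-qualified service DNS name rather than the bare "redis". A sketch only; the service name "redis" is an assumption based on the URLs above:
- name: CELERY_BROKER_URL
  value: redis://:$(CELERY_PASSWORD)@redis.redis.svc.cluster.local:6379/1
- name: CELERY_RESULT_BACKEND
  value: redis://:$(CELERY_PASSWORD)@redis.redis.svc.cluster.local:6379/1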
I am trying to set up my Django project to work with Celery and Redis. I have no issues running it locally, but I can't get it working on the production server.
My hosting provider recommends setting up Redis using a unix socket and running Redis in a screen session:
port 0
unixsocket /path/redis.sock
This all works; when I run Redis I get:
* The server is now ready to accept connections at /here-comes-my-path/redis.sock
Now I have issues:
How do I verify the connection? redis-cli -p 0 returns
Could not connect to Redis at 127.0.0.1:0: Can't assign requested address
not connected>
How do I start the Celery worker? Running celery -A rbwpredictor worker -l info
returns (I've x-ed out sensitive data):
Traceback (most recent call last):
  File "/usr/home/xxxx/.virtualenvs/xxxx/bin/celery", line 6, in <module>
    from celery.__main__ import main
  File "/home/xxx/domains/xxxx/public_python/xxxx/celery.py", line 6, in <module>
    from celery import Celery
ImportError: cannot import name 'Celery'
My Celery settings in settings.py:
CELERY_RESULT_BACKEND = 'django-db'
CELERY_STATUS_PENDING = 'PENDING'
CELERY_STATUS_STARTED = 'STARTED'
CELERY_STATUS_RETRY = 'RETRY'
CELERY_STATUS_FAILURE = 'FAILURE'
CELERY_STATUS_SUCCESS = 'SUCCESS'
As mentioned above locally everything works fine, it's the configuration on the server I struggle with.
You have configured Redis to communicate through a unix socket, not through the standard port.
To connect with redis-cli you can use:
redis-cli -s /here-comes-my-path/redis.sock
And you should reconfigure redis.conf, or just set BROKER_URL accordingly:
BROKER_URL = 'redis+socket:///here-comes-my-path/redis.sock'
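You can also sanity-check the socket from Python (a quick sketch, assuming the redis-py package is installed in the virtualenv and the socket path is the one from your config):
import redis

# Connect over the unix socket instead of TCP; ping() returns True if Redis answers.
r = redis.Redis(unix_socket_path='/here-comes-my-path/redis.sock')
print(r.ping())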
I wanted to use Redis with dokku and Flask. The first issue was installing a current version of dokku; I am using the latest version from the repo now.
The second problem shows up in the Flask debugger:
redis.exceptions.ConnectionError
ConnectionError: Error 111 connecting to None:6379. Connection refused.
I set the Redis URL and port in Flask:
app.config['REDIS_URL'] = 'IP:32768'
-----> Checking status of Redis
remote: Found image redis/landing
remote: Checking status...stopped.
remote: Launching redis/landing...COMMAND: docker run -v /home/dokku/.redis/volume-landing:/var/lib/redis -p 6379 -d redis/landing /bin/start_redis.sh
-----> Setting config vars
REDIS_URL: redis://IP:6379
REDIS_IP: IP
REDIS_PORT: 6379
Any idea? Should REDIS_URL be set in a different way?
This code works fine on localhost:
https://github.com/kwikiel/bounce
(with ['REDIS_IP'] = '172.17.0.13' set to 127.0.0.1)
The problem appears when I try to connect to the dokku Redis.
Steps to use Redis with Flask and dokku:
Install redis plugin:
cd /var/lib/dokku/plugins
git clone https://github.com/ohardy/dokku-redis redis
dokku plugins-install
Link your Redis container to the application container:
dokku redis:create [name of app container]
You will receive info about the environment variables that you will have to set, for example:
Host: 172.17.0.91
Public port: 32771
Then set these settings in Flask (or another framework):
app.config['REDIS_URL'] = 'redis://172.17.0.91:6379'
app.config['REDIS_IP'] = '172.17.0.91'
app.config['REDIS_PORT'] = '6379'
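For example, a minimal sketch of building a Redis client from those config values (assuming the redis-py package is installed; the IP and port are the ones reported by the plugin):
import redis
from flask import Flask

app = Flask(__name__)
app.config['REDIS_IP'] = '172.17.0.91'
app.config['REDIS_PORT'] = '6379'

# Build the client from the config values set above.
r = redis.StrictRedis(host=app.config['REDIS_IP'],
                      port=int(app.config['REDIS_PORT']),
                      db=0)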
A complete example of a Redis database used with a Flask app (A/B testing in Flask):
https://github.com/kwikiel/bounce
According to the Celery "Using Redis" docs:
Configuration
Configuration is easy, just configure the location of your Redis database:
BROKER_URL = 'redis://localhost:6379/0'
Where the URL is in the format of:
redis://:password@hostname:port/db_number
all fields after the scheme are optional, and will default to localhost on port 6379, using database 0.
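For example, a fully-specified URL following that format would look like this (hypothetical password and hostname, database 0):
BROKER_URL = 'redis://:mypassword@myredishost.example.com:6379/0'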
Where can I find this configuration file? Is it in the settings.py of my project?