How to deploy Django with Channels and Celery on Heroku?

How does one deploy the following stack on the Heroku platform?
Django
Django Channels
Celery
The limitation is surely in the Procfile.
To deploy Django with Celery it would be something like:
web: gunicorn project.wsgi:application
worker: celery worker --app=project.taskapp --loglevel=info
While deploying Django with Channels:
web: daphne project.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py runworker -v2
The web process can use ASGI, but the worker process will be taken by Channels, and I don't see how Celery can be started alongside it.

You can have as many entries in the Procfile as you like. The only one that is special is "web", because that's the one Heroku expects to receive web requests, and the only one it will start automatically for you. You can use your own names for the rest:
web: gunicorn project.wsgi:application
celeryworker: celery worker --app=project.taskapp --loglevel=info
channelsworker: python manage.py runworker -v2
Now you can do heroku ps:scale celeryworker=1 and heroku ps:scale channelsworker=1 to start the other two processes.
See the Heroku Procfile docs for more information.
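Applied to the stack in the question, a sketch of a combined Procfile (reusing the daphne, runworker, and Celery commands from the question; the project names are placeholders) could be:
web: daphne project.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
channelsworker: python manage.py runworker -v2
celeryworker: celery worker --app=project.taskapp --loglevel=info
followed by heroku ps:scale channelsworker=1 celeryworker=1 to start both background processes.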

Related

cookiecutter-django production deploy with Gunicorn / uWSGI and Nginx

Why would I not be able to access port 8000 through Gunicorn if I can through Django's development server?
The docs mention that a production deploy with Gunicorn / uWSGI and Nginx has been done successfully, but give no steps. I have been looking at external guides for a production deploy of Django.
I can run the Gunicorn command below and it starts listening, but for some reason I am not able to access port 8000. I can access port 8000 publicly when I run ./manage.py runserver 0.0.0.0:8000
gunicorn config.wsgi:application --bind 0.0.0.0:8000 --name project_django_cookie_cutter --user=$NON_ROOT_USER --group=$NON_ROOT_USER --log-level=debug
I am using the Django stack packaged by Bitnami.
The tutorial I have been following most recently, since Gunicorn has not failed to run, is: https://realpython.com/development-and-deployment-of-cookiecutter-django-on-fedora/
I have tried going through these guides: https://docs.bitnami.com/google/infrastructure/django/get-started/deploy-django-project/
https://realpython.com/django-nginx-gunicorn/
https://www.pybloggers.com/2016/01/development-and-deployment-of-cookiecutter-django-on-fedora/
https://medium.com/analytics-vidhya/dajngo-with-nginx-gunicorn-aaf8431dc9e0
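A minimal troubleshooting sketch for the symptom above (not an answer from this thread; it assumes a Linux server with ss and curl available) is to confirm on the server that Gunicorn is actually reachable before looking elsewhere:
ss -tlnp | grep 8000            # is Gunicorn bound to 0.0.0.0:8000?
curl -I http://127.0.0.1:8000/  # does it answer locally?
If the local curl fails, the problem is in Gunicorn itself (check its output with --log-level=debug as above); if it succeeds, compare the network path with the working runserver case, since runserver on 0.0.0.0:8000 already reaches the outside.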

How to have two workers for heroku.yml?

I have the following heroku.yml. The 'containers' share the same Dockerfile:
build:
  docker:
    web: Dockerfile
    celery: Dockerfile
    celery-beat: Dockerfile
release:
  image: web
  command:
    - python manage.py migrate users && python manage.py migrate
run:
  web: python manage.py runserver 0.0.0.0:$PORT
  celery: celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info
  celery-beat: celery --app=my_app beat -l info
I intended to have three containers, but it turns out that Heroku accepts only one web process and the others have to be workers.
So what do I modify in heroku.yml to run the celery and celery-beat containers as workers?
UPDATE
I've changed the heroku.yml to the following, but Heroku keeps only the last worker (i.e. celery beat) and ignores the first worker:
build:
  docker:
    web: Dockerfile
release:
  image: web
  command:
    - python manage.py migrate users && python manage.py migrate
run:
  web: python manage.py runserver 0.0.0.0:$PORT
  worker:
    command:
      - celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info
    image: web
  worker:
    command:
      - celery --app=my_app beat -l info
    image: web
What am I missing?
The name worker isn't really important:
No process types besides web and release have special properties
So just give them different names:
run:
  web: python manage.py runserver 0.0.0.0:$PORT
  celery_worker:
    command:
      - celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info
    image: web
  celery_beat:
    command:
      - celery --app=my_app beat -l info
    image: web
When you scale those processes, use the names celery_worker and celery_beat.
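For example, both can be scaled in one command (a sketch, assuming the process names above):
heroku ps:scale celery_worker=1 celery_beat=1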
A better option is to combine the Celery worker and beat into a single worker command (this can only be done on a Linux OS):
run:
  web: python manage.py runserver 0.0.0.0:$PORT
  celery_worker:
    command:
      - celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state --beat -l info
    image: web

Heroku dynos: can I technically use fewer dynos in my Django setup?

I'm using the following with my Django application:
Django channels
Celery with both regular and periodic tasks
Deployed on Heroku
My Procfile looks like this:
web: daphne artist_notify.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py migrate --noinput && python manage.py runworker -v2
celerybeat: celery -A artist_notify beat -l info
celeryworker: celery -A artist_notify worker -l info
This combination seems to be expensive, and I'm wondering if I can make it better. I tried combining celerybeat and celeryworker (with &&) into one dyno called celery, like so:
celery: celery -A artist_notify beat -l info && celery -A artist_notify worker -l info
But that doesn't work, although other && combinations do work. I'm wondering if I can maybe combine the workers from worker and celeryworker?
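One thing worth noting (a sketch, not an answer from this thread): with &&, the second command only starts after the first one exits, and celery beat never exits, so the worker after it is never launched. Celery's embedded beat option (-B/--beat) runs both in one process, which would collapse celerybeat and celeryworker into a single dyno:
celeryworker: celery -A artist_notify worker -l info --beat
The Celery docs only suggest embedded beat when no more than one worker node will ever run, so this trades a dyno for that constraint.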

Django on Heroku - how can I get a celery worker to run correctly?

I am trying to deploy the simplest possible "hello world" celery configuration on heroku for my Django app. My Procfile is as follows:
web: gunicorn myapp.wsgi
worker: celery -A myapp worker -l info -B -b amqp://XXXXX:XXXXX@red-thistle-3.bigwig.lshift.net:PPPP/XXXXX
This is the RABBITMQ_BIGWIG_RX_URL that I'm giving to the celery worker. I have the corresponding RABBITMQ_BIGWIG_TX_URL in my settings file as the BROKER_URL.
If I use these broker URLs in my local dev environment, everything works fine and I can actually use the Heroku RabbitMQ system. However, when I deploy my app to Heroku it isn't working.
This Procfile seems to work (although Celery is giving me memory leak issues).
web: gunicorn my_app.wsgi
celery: celery worker -A my_app -l info --beat -b amqp://XXXXXXXX:XXXXXXXXXXXXXXXXXXXX@red-thistle-3.bigwig.lshift.net:PPPP/XXXXXXXXX
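A related sketch (an assumption, not part of the answer above): rather than hard-coding credentials in the Procfile, the broker URL can be read from the environment in settings.py, so the same worker line works locally and on Heroku:
import os

# Assumes the Bigwig add-on exposes RABBITMQ_BIGWIG_RX_URL as a config var;
# the fallback points at a local RabbitMQ instance for development.
BROKER_URL = os.environ.get('RABBITMQ_BIGWIG_RX_URL', 'amqp://guest:guest@localhost:5672//')
The Procfile entry then shrinks to worker: celery -A my_app worker -l info --beat.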

gunicorn on heroku binding to localhost, without messing up deployment to heroku

This is a spin-off question from gunicorn on heroku: binding to localhost.
How can I get gunicorn to work both locally (using foreman) and for deployment on Heroku?
Procfile contains:
web: gunicorn mysite.wsgi
When I deploy locally using foreman, gunicorn binds to http://0.0.0.0:5000. I want it to bind to 127.0.0.1:8000. However, if I change the Procfile to this:
web: gunicorn -b 127.0.0.1:8000 mysite.wsgi
Then I can't deploy to Heroku; the browser returns "application error":
$ heroku ps
=== web (1X): `gunicorn -b 127.0.0.1:8000 mysite.wsgi`
web.1: crashed 2013/08/22 23:45:04 (~ 1m ago)
Where is the default binding address set, and/or what gunicorn options do I put in the Procfile to get it to work on 127.0.0.1?
What could be unique to my situation that causes a deviant default setup? (I'm working on Mac OS X Lion, maybe that's it?)
Don't bind Gunicorn to the local IP with web: gunicorn -b 127.0.0.1:8000 mysite.wsgi in your Procfile. This forces your Django app to always use that local address and port, whether it's deployed locally or on Heroku's servers.
Using
web: gunicorn mysite.wsgi
in your Procfile will make your application deploy at 127.0.0.1:8000 locally and 0.0.0.0:5000 on Heroku's servers. I know you had to use the bind method in your previous question to get Heroku to work locally, but that method is only covering up an issue that isn't resolved.
Using foreman start with web: gunicorn mysite.wsgi should work, as the official docs say (and my own experience :)).
Try just web: gunicorn mysite.wsgi in your Procfile, deploy it to Heroku, and see if it works.
https://devcenter.heroku.com/articles/python
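If an explicit bind is still wanted, a common sketch (an assumption, not from the answers above) is to bind to the port that both foreman and Heroku export:
web: gunicorn mysite.wsgi --bind 0.0.0.0:$PORT
Locally, foreman start -p 8000 then serves on port 8000, while Heroku injects its own $PORT on deploy.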