Heroku dynos: can I technically use fewer dynos in my Django setup? - django

I'm using the following with my Django application:
Django channels
Celery with both regular and periodic tasks
Deployed on Heroku
My Procfile looks like this:
web: daphne artist_notify.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py migrate --noinput && python manage.py runworker -v2
celerybeat: celery -A artist_notify beat -l info
celeryworker: celery -A artist_notify worker -l info
This combination seems to be expensive, and I'm wondering if I can make it better. I tried combining celerybeat and celeryworker (with &&) into one dyno called celery, like so:
celery: celery -A artist_notify beat -l info && celery -A artist_notify worker -l info
But that doesn't work, although other && combinations do work. I'm wondering if I can maybe combine the workers from worker and celeryworker?
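The && chaining fails because celery beat runs indefinitely, so the worker command after && never gets a chance to start. One plausible option (an answer in a related thread below uses the same trick) is Celery's embedded beat scheduler: the -B/--beat flag runs beat inside the worker process, so a single dyno can cover both. A sketch for this Procfile:
celery: celery -A artist_notify worker -l info -B
Note that embedded beat must only ever run once, so this dyno should never be scaled beyond one instance.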

Related

How to have two workers for heroku.yml?

I have the following heroku.yml. The 'containers' share the same Dockerfile:
build:
  docker:
    web: Dockerfile
    celery: Dockerfile
    celery-beat: Dockerfile
release:
  image: web
  command:
    - python manage.py migrate users && python manage.py migrate
run:
  web: python manage.py runserver 0.0.0.0:$PORT
  celery: celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info
  celery-beat: celery --app=my_app beat -l info
I intended to have three containers, but it turns out that Heroku accepts only one web process; the others have to be workers.
So what do I modify at heroku.yml to have celery and celery-beat containers as worker?
UPDATE
I've changed the heroku.yml to the following, but Heroku keeps only the last worker (i.e. celery beat) and ignores the first worker:
build:
  docker:
    web: Dockerfile
release:
  image: web
  command:
    - python manage.py migrate users && python manage.py migrate
run:
  web: python manage.py runserver 0.0.0.0:$PORT
  worker:
    command:
      - celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info
    image: web
  worker:
    command:
      - celery --app=my_app beat -l info
    image: web
What am I missing?
The name worker isn't really important. From the Heroku documentation:
No process types besides web and release have special properties
Also, YAML mapping keys must be unique, so your second worker entry simply overwrites the first; that's why only the last one survives. So just give them different names:
run:
  web: python manage.py runserver 0.0.0.0:$PORT
  celery_worker:
    command:
      - celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info
    image: web
  celery_beat:
    command:
      - celery --app=my_app beat -l info
    image: web
When you scale those processes, use the names celery_worker and celery_beat.
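For example, using the names from the heroku.yml above:
heroku ps:scale celery_worker=1 celery_beat=1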
A better option is to combine the Celery worker and beat into a single worker command using the --beat flag (embedded beat can only be used on Unix-like systems, not on Windows):
run:
  web: python manage.py runserver 0.0.0.0:$PORT
  celery_worker:
    command:
      - celery --app=my_app worker --pool=prefork --concurrency=4 --statedb=celery/worker.state -l info --beat
    image: web

How to execute multiple commands in a single line in Linux (OpenShift / Docker)

I'm looking for a way to run the Django server and Celery in a single line. The services (Django and Celery) are deployed in OpenShift as two separate pods built from the same image. Currently I'm running the Django service (pod) with python manage.py runserver and the Celery pod with celery -A myapp worker --loglevel=info --concurrency=8
Instead of running a separate pod for each, I want to execute the runserver command and the Celery worker command together. How do I do that?
I know that &&, ;, and || are used for such scenarios, but those don't work.
for example :
cd ./app && python manage.py runserver #this works
cd ./app && python manage.py runserver && celery -A myapp worker --loglevel=info --concurrency=8
# this will cd into app and start the server, but the celery command never gets executed
Create a bash file containing the two commands, like this:
python manage.py runserver &    # start the Django server in the background
celery -A myapp worker --loglevel=info --concurrency=8    # run celery in the foreground to keep the container alive
make it executable with "chmod +x"
and run it in your docker container with
bash my_file.sh
The && joins two commands: the second command will run only AFTER THE FIRST HAS FINISHED, and depending on its exit code.
It's a logical and -- so, if the first command returns false, the second doesn't run (since the logical and is already known to evaluate as false).
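You can see the short-circuit behaviour with a pair of trivial commands:
true && echo "this runs"        # first command succeeded, so the second runs
false && echo "this is skipped" # first command failed, so the second never runs
Since runserver blocks until the server exits, anything chained after it with && will effectively never run.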
Try this:
cd /path; ( python manage.py runserver &); ( celery -A myapp worker --loglevel=info --concurrency=8 &);
The final & puts celery in the background as well. Remove it if you want celery in the foreground. As noted above, this is not a "container native" design - if you have one process per container the rest of the management becomes significantly more straightforward. It's commonly worthwhile to figure out the rest of the adjustments needed (e.g. an emptyDir{} can be used to share files between containers within the pod) and simplify deployment sooner rather than later.
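For reference, a minimal sketch of that container-native alternative: two containers in one OpenShift/Kubernetes pod sharing an emptyDir volume (the pod name, image tag, and mount path below are placeholder assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: myapp                    # placeholder pod name
spec:
  volumes:
    - name: shared
      emptyDir: {}               # scratch space shared by both containers
  containers:
    - name: django
      image: myapp:latest        # placeholder image
      command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
      volumeMounts:
        - name: shared
          mountPath: /shared
    - name: celery
      image: myapp:latest
      command: ["celery", "-A", "myapp", "worker", "--loglevel=info", "--concurrency=8"]
      volumeMounts:
        - name: shared
          mountPath: /shared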

the proper way to run django rq in docker microservices setup

I guess I have a somewhat bad setup of my Docker containers, because each time I run a task from Django, the ps aux output in the Docker container shows a new python manage.py rqworker mail process being created instead of the existing one being used.
See the screencast: https://imgur.com/a/HxUjzJ5
The command executed by my docker-compose rq worker container looks like this:
#!/bin/sh -e
wait-for-it
# clean up stale rq worker registrations left behind in redis
for KEY in $(redis-cli -h $REDIS_HOST -n 2 KEYS "rq:worker*"); do
    redis-cli -h $REDIS_HOST -n 2 DEL $KEY
done
if [ "$ENVIRONMENT" = "development" ]; then
    python manage.py rqworkers --worker-class rq.SimpleWorker --autoreload;
else
    python manage.py rqworkers --worker-class rq.SimpleWorker --workers 4;
fi
I am new to Docker and wondering a bit about it being started like this, without daemonization... but that is the dockerish way of doing things, right?
Here's what I do, with docker-compose:
version: '3'
services:
  web:
    build: .
    image: mysite
    [...]
  rqworker:
    image: mysite
    command: python manage.py rqworker
    [...]
  rqworker_high:
    image: mysite
    command: python manage.py rqworker high
    [...]
Then start with:
$ docker-compose up --scale rqworker_high=4

How to deploy Django with Channels and Celery on Heroku?

How does one deploy the following stack on the Heroku platform?
Django
Django Channels
Celery
The limitation is surely in the Procfile.
To deploy Django with Celery it would be something like:
web: gunicorn project.wsgi:application
worker: celery worker --app=project.taskapp --loglevel=info
While deploying Django with Channels:
web: daphne project.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py runworker -v2
The web process can use ASGI, but the worker process will be used by Channels, and I don't see how Celery can be started alongside it.
You can have as many entries in the Procfile as you like. The only one that is special is "web", because that's the one Heroku expects to receive web requests, and the only one it will start automatically for you. You can use your own names for the rest:
web: gunicorn project.wsgi:application
celeryworker: celery worker --app=project.taskapp --loglevel=info
channelsworker: python manage.py runworker -v2
Now you can do heroku ps:scale celeryworker=1 and heroku ps:scale channelsworker=1 to start the other two processes.
See the Heroku Procfile docs for more information.

Django on Heroku - how can I get a celery worker to run correctly?

I am trying to deploy the simplest possible "hello world" celery configuration on heroku for my Django app. My Procfile is as follows:
web: gunicorn myapp.wsgi
worker: celery -A myapp worker -l info -B -b amqp://XXXXX:XXXXX@red-thistle-3.bigwig.lshift.net:PPPP/XXXXX
This is the RABBITMQ_BIGWIG_RX_URL that I'm giving to the celery worker. I have the corresponding RABBITMQ_BIGWIG_TX_URL in my settings file as the BROKER_URL.
If I use these broker URLs in my local dev environment, everything works fine and I can actually use the Heroku RabbitMQ system. However, when I deploy my app to Heroku it isn't working.
This Procfile seems to work (although Celery is giving me memory leak issues).
web: gunicorn my_app.wsgi
celery: celery worker -A my_app -l info --beat -b amqp://XXXXXXXX:XXXXXXXXXXXXXXXXXXXX@red-thistle-3.bigwig.lshift.net:PPPP/XXXXXXXXX
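As noted in the Channels answer above, Heroku starts only the web process automatically, so a process type named celery has to be scaled explicitly, for example:
heroku ps:scale celery=1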