How to run the qcluster process in production (Django-q)? - django

I have a Django webapp. The app has some scheduled tasks; for this I'm using django-q. In local development you need to run manage.py qcluster for the scheduled tasks to execute.
How can I automatically run the qcluster process in production?
I'm deploying to a Digital Ocean droplet, using ubuntu, nginx and gunicorn.

Are you using a Procfile?
My configuration is to have a Procfile that contains:
web: python ./manage.py runserver 0.0.0.0:$PORT
worker: python ./manage.py qcluster
This way, every time the web process is started, another process for django-q is also created.
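If you're not on a Procfile-based platform, the usual approach on a plain Ubuntu droplet is a systemd unit that supervises qcluster alongside gunicorn. A minimal sketch, assuming the project lives in /home/django/myproject with a virtualenv at /home/django/venv (the paths, user, and unit name are assumptions, adjust them to your setup):

```ini
# /etc/systemd/system/qcluster.service  (paths and user are assumptions)
[Unit]
Description=Django-Q qcluster worker
After=network.target

[Service]
User=django
WorkingDirectory=/home/django/myproject
ExecStart=/home/django/venv/bin/python manage.py qcluster
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now qcluster so it starts on boot and is restarted automatically if it crashes, the same way your gunicorn unit is.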

Related

Populating a Django database using migrations

I created an API that allows reading some data with filters, based on the models I created.
On launch the database is empty, so I created a script to populate it.
I then dockerized the app and deployed it to AWS.
The issue is that every time the container restarts, the script is re-run.
I would like to use migrations to do this instead.
How?
for now the docker entrypoint is
python manage.py wait_for_db
python manage.py makemigrations API
python manage.py migrate
# python import_csv.py
uwsgi --socket :9000 --workers 4 --master --enable-threads --module myapi.wsgi
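One way to make the one-time population part of the migration history (so Django's migration tracking prevents it from re-running on restart) is to replace the commented-out import_csv.py step with a data migration using RunPython. A sketch, assuming an app named API with a model called Item and a data.csv next to manage.py; the file name, model, and CSV columns here are all placeholders:

```python
# API/migrations/0002_populate.py -- migration name, model, and CSV layout
# are assumptions; adapt to your actual schema.
import csv

from django.db import migrations


def load_data(apps, schema_editor):
    # Use the historical model via apps.get_model, not a direct import.
    Item = apps.get_model("API", "Item")
    with open("data.csv", newline="") as f:
        for row in csv.DictReader(f):
            # get_or_create keeps the load idempotent even if re-run.
            Item.objects.get_or_create(name=row["name"], defaults=row)


def unload_data(apps, schema_editor):
    apps.get_model("API", "Item").objects.all().delete()


class Migration(migrations.Migration):
    dependencies = [("API", "0001_initial")]
    operations = [migrations.RunPython(load_data, unload_data)]
```

Because migrate records applied migrations in the django_migrations table, a container restart no longer re-imports the data. As a side note, makemigrations normally belongs in development, not in a production entrypoint: generating migrations at container start can produce migration files that exist only inside that container.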

Django: Do I have to restart celery beat, celery worker and Django gunicorn when new changes are uploaded to the production server?

I have a production server running a Django application.
The Django server is run using gunicorn and nginx:
pipenv run gunicorn --workers=1 --threads=50 --bind 0.0.0.0:8888 boiler.wsgi:application
celery worker is run using
pipenv run celery -A boiler worker
celery beat is run using
pipenv run celery -A boiler beat
Now I have updated my model and a few views on my production server (i.e. pulled some changes from GitHub).
In order to reflect the changes, should I restart all of celery beat, celery worker and the Django gunicorn server,
or are only celery worker and gunicorn sufficient,
or is restarting gunicorn alone sufficient?
If you have made changes to any code that in one way or another affects the celery tasks, then yes, you should restart the celery worker. If you are not sure, a safe bet is to restart. And since celery beat tracks the scheduling of periodic tasks, you should also restart it if you restart the workers. Of course, you should ensure there are no tasks currently running, or properly kill them, before restarting. You can monitor the tasks using Flower.
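In practice that means restarting all three processes after a deploy. A sketch assuming they run under systemd with these unit names (the unit names are assumptions, match them to however you actually supervise the processes):

```shell
# Unit names are assumptions -- adjust to your setup.
sudo systemctl restart gunicorn        # picks up new Python code for the web app
sudo systemctl restart celery-worker   # reloads updated task code
sudo systemctl restart celery-beat     # reloads the periodic-task schedule
```

If the processes are started manually with pipenv run as above rather than supervised, you would instead stop and re-launch each one; a process supervisor makes this kind of deploy much less error-prone.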

Having trouble getting django heroku to run properly. How do I resolve error code=H14 desc="No web processes running"?

I have previously run this app on Heroku without issues. But it has been around 6 months since I last deployed, and I also switched from a Linux to a Windows machine.
Now when I deploy, the deployment is successful but the service does not work. When I check the logs the error is:
code=H14 desc="No web processes running"
I have not changed the Procfile or the requirements.txt since it had been working
requirements.txt:
django
gunicorn
django-heroku
requests
djangorestframework
django-cors-headers
flask-sslify
Procfile:
release: python manage.py migrate
web: gunicorn findtheirgifts.wsgi --log-file -
wsgi.py
"""
WSGI config for findtheirgifts project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "findtheirgifts.settings")
application = get_wsgi_application()
I have tried some suggestions from similar threads
heroku ps:restart
heroku buildpacks:clear
heroku ps:scale web=1
None of which seemed to change anything. Any help on this would be greatly appreciated!
As the error indicates, your app doesn't have any web process running.
You can see all running processes from the CLI with the following command:
heroku ps -a <your app name>
And scale your web process to 1 with the following:
heroku ps:scale web=1 -a <your app name>
Which will start one instance of your app.
See the Heroku Scaling Documentation.
Turns out I had initialized the Heroku git remote in a different directory than I had previously, causing the app to not find the Procfile.

makemigrations or migrate while server is running

I'm wondering how to handle database migrations in Django while the site is in production. While developing, we stop the server, make changes to the database, and then rerun the server. This may be a stupid question, but I am learning by myself and can't figure it out. Thanks in advance.
You can connect to the server using SSH and run the migration commands without stopping the server; once you are done, you restart it.
python manage.py makemigrations
and then
python manage.py migrate
and then restart the server.
For example, with nginx and gunicorn:
sudo service gunicorn restart
sudo service nginx restart

Django Manage.py Migrate from Google Managed VM Dockerfile - How?

I'm working on a simple implementation of Django hosted on Google's Managed VM service, backed by Google Cloud SQL. I'm able to deploy my application just fine, but when I try to issue some Django manage.py commands within the Dockerfile, I get errors.
Here's my Dockerfile:
FROM gcr.io/google_appengine/python
RUN virtualenv /venv -p python3.4
ENV VIRTUAL_ENV /venv
ENV PATH /venv/bin:$PATH
# Install dependencies.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add application code.
ADD . /app
# Overwrite the settings file with the PROD variant.
ADD my_app/settings_prod.py /app/my_app/settings.py
WORKDIR /app
RUN python manage.py migrate --noinput
# Use Gunicorn to serve the application.
CMD gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
# [END docker]
Pretty basic. If I exclude the RUN python manage.py migrate --noinput line, and deploy using the GCloud tool, everything works fine. If I then log onto the VM, I can issue the manage.py migrate command without issue.
However, in the interest of simplifying deployment, I'd really like to be able to issue Django manage.py commands from the Dockerfile. At present, I get the following error if the manage.py statement is included:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/cloudsql/my_app:us-central1:my_app_prod_00' (2)")
Seems like a simple enough error, but it has me stumped, because the connection is certainly valid. As I said, if I deploy without issuing the manage.py command, everything works fine. Django can connect to the database, and I can issue the command manually on the VM.
I'm wondering if the reason for my problem is that the Cloud SQL proxy socket (/cloudsql/) doesn't exist when the Dockerfile is being deployed. If so, how do I get around this?
I'm new to Docker (this being my first attempt) and newish to Django, so I'm unsure of the correct approach for handling a deployment of this nature. Should I instead be positioning this command elsewhere?
There are two steps involved in deploying the application.
In the first step, the Dockerfile is used to build the image, which can happen on your machine or on another machine.
In the second step, the created docker image is executed on the Managed VM.
The RUN instruction is executed when the image is being built, not when the container is run.
You should move the manage.py migrate call into the CMD instruction, which executes when the container starts:
CMD python manage.py migrate --noinput && gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi