There are three ways to run a Django application with gunicorn:

1. Standard gunicorn + WSGI (ref. Django doc):
   gunicorn project.wsgi:application
2. Gunicorn's Django integration (ref. gunicorn doc and Django doc):
   python manage.py run_gunicorn
3. The gunicorn_django command (ref. gunicorn doc):
   gunicorn_django [OPTIONS] [SETTINGS_PATH]
Django's documentation suggests using 1., which isn't even listed as an option in the Gunicorn documentation.
Is there a best practice for running a Django app with gunicorn, and what are the foreseeable advantages/disadvantages of these different solutions?
Taking a glimpse at gunicorn's code, it looks like they all do pretty much the same thing: 2. seems to create a WSGI app using Django's internals, and 3. uses 2.
If that's the case, I don't even understand why you wouldn't simply use 1. all the time, especially since a wsgi.py file is auto-created for you since Django 1.4; if that's true, maybe a simple documentation improvement should be suggested...
Also, best practices for gunicorn settings with Django would be great. Using 1., does it make sense to set some defaults in the wsgi file and avoid additional settings?
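For reference, the wsgi.py that Django 1.4 auto-creates looks roughly like this (the project name is a placeholder), and its os.environ.setdefault line is exactly where such a default would live:
# project/wsgi.py -- roughly what Django 1.4 generates; "project" is a placeholder name
import os

# Default settings module, used unless DJANGO_SETTINGS_MODULE is already set in the environment
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()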
References:
Should I use django-gunicorn integration or wsgi? only concerns choices 1. and 3.; there's no hint about settings, and the answer gives no rationale.
Deploying Django with gunicorn and nginx gives some broader information but is not strictly related and doesn't answer this question.
Django Gunicorn wsgi is about a fourth option: launching gunicorn -c configfile, where the config file points gunicorn at the Django settings.
Django WSGI and Gunicorn is just a bit confusing :) mixing up 1. and 3. Of course, wsgi.py is only used with 1.
After checking it out, I'd say that the best way is using gunicorn + WSGI:
$ gunicorn project.wsgi:application
It's now confirmed both in the gunicorn docs and in the Django docs linked above: if you run Django 1.4 or newer, it's highly recommended to simply run your application with the WSGI interface using the gunicorn command.
It also avoids adding gunicorn as an installed app, which means gunicorn isn't a requirement just to test your app, which can be useful from time to time.
About Settings
The Django settings file to be used can be passed through an environment variable, or customized in the wsgi.py file. I sometimes create several wsgi.py files if I have multiple settings (e.g. multiple websites) that have to run from the same project - see the Django docs for more info.
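For example, a second entry point alongside the default wsgi.py only has to point at a different settings module (the file and module names below are made up for illustration):
# project/wsgi_site2.py -- hypothetical second WSGI entry point for another settings module
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings_site2")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
gunicorn is then simply pointed at it: gunicorn project.wsgi_site2:application.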
A one-liner solution that does not require any new file from Carl's comment:
DJANGO_SETTINGS_MODULE=project.settings.prod gunicorn project.wsgi:application
sounds like a nicer way (though I'll probably end up wrapping it in a small shell script to make it easy to "remember").
Gunicorn settings can be passed with -c settings_file, but I'm exploring other ways and will try to update this answer if I find any. Using environment variables seems like a workaround, but only for limited cases.
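For completeness, the file passed with -c is itself plain Python, so it can read from os.environ too; a minimal sketch (values and variable names are placeholders, not recommendations):
# gunicorn_conf.py -- hypothetical config file, used as: gunicorn -c gunicorn_conf.py project.wsgi:application
import os

bind = os.environ.get("GUNICORN_BIND", "127.0.0.1:8000")
workers = int(os.environ.get("GUNICORN_WORKERS", "3"))
timeout = 30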
In particular, it would be nice to get/share some settings between Django and gunicorn; the gunicorn documentation says:
Currently, only Paster applications have access to framework
specific settings. If you have ideas for providing settings to WSGI
applications or pulling information from Django’s settings.py feel
free to open an issue to let us know.
(Update: I haven't found any smarter way, but after all, env variables are enough for my most common cases.)
I guess using run_gunicorn is the way to go; it's also the simplest way to use it.
It's basically the same as using gunicorn project.wsgi:application, but it needs gunicorn to be added to INSTALLED_APPS so that Django recognizes the run_gunicorn command, which is probably why it's not the default way...
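Concretely, that extra coupling is just this one entry in settings (a sketch, assuming the usual settings layout):
# settings.py -- only needed so that "python manage.py run_gunicorn" is available
INSTALLED_APPS = (
    # ... your other apps ...
    'gunicorn',
)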
Using gunicorn_django is more or less deprecated, as the documentation also states here...
Found a solution that works both with manage.py (on the local machine) and with gunicorn.
Create a file that holds all the environment stuff:
# mysite/settings/set_env.py
import os

_environment = os.getenv('environment', None)

if _environment == "production":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings.production")
elif _environment == "development":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings.development")
else:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings.local")
and import this file in your manage.py and wsgi.py files; there's no need to change anything else in the gunicorn invocation.
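For example, the top of wsgi.py then looks something like this (a sketch, assuming the set_env module above; manage.py gets the same import before execute_from_command_line):
# mysite/wsgi.py
import mysite.settings.set_env  # noqa -- sets DJANGO_SETTINGS_MODULE as a side effect

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()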
Related
Reading the Celery daemonization documentation: if you're running Celery on Linux with systemd, you set it up with two files:
/etc/systemd/system/celery.service
/etc/conf.d/celery
I'm using Celery in a Django site with django-celery-beat, and the documentation is a little confusing on this point:
Example Django configuration
Django users now uses [sic] the exact same template as above, but make sure that the module that defines your Celery app instance also sets a default value for DJANGO_SETTINGS_MODULE as shown in the example Django project in First steps with Django.
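For context, the module the quote is talking about is the usual celery.py next to settings.py; a minimal sketch following the First steps with Django layout (project name assumed):
# mysite/celery.py -- project name "mysite" is assumed
import os

from celery import Celery

# Default Django settings module, set before the Celery app is created
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

app = Celery('mysite')
# Read CELERY_* settings from Django's settings.py (hence the uppercase/CELERY_ rule)
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()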
The docs don't just come out and say: put your daemonization settings in settings.py and it will all work out, bla, bla. From another SO post, this user seems to have run into the same confusion, where the Django instructions imply you should use the init.d method.
Bonus points if you can answer whether it's possible to run Celery and RabbitMQ both configured with the Django instance (if that makes sense).
I'm thinking not Celery, if only because the daemon variables start with CELERYD_ and First steps with Django says: "...all Celery configuration options must be specified in uppercase instead of lowercase, and start with CELERY_"
I'm trying to understand how the virtual environment gets invoked. The website I have been tasked to manage has a .venv directory. When I ssh into the site to work on it, I understand I need to activate it with source .venv/bin/activate. My question is: how does the web application invoke the virtual environment? How do I know it is using the .venv, not the global Python?
More detail: it's a Drupal website with Django kind of glommed onto it. Apache is the main server. I believe the Django part is served by gunicorn. The designer left town.
Okay, I've found how, in my case, the virtualenv was being invoked for Django.
The BASE_DIR/run/gunicorn script has:
#GUNICORN='/usr/bin/gunicorn'
GUNICORN=".venv/bin/gunicorn"
GUNICORN_CONF="$BASE_DIR/run/gconf.py"
.....
$GUNICORN --config $GUNICORN_CONF --daemon --pid $PIDFILE $MODULE
So this takes us into the .venv, where the gunicorn script starts with:
#!/media/disk/www/aavso_apps/.venv/bin/python
Voila
Just use the absolute path when calling Python in the virtualenv.
For example, if your virtualenv is located in /var/webapps/yoursite/env,
then you must call /var/webapps/yoursite/env/bin/python.
If you run just Django behind a reverse proxy, Django will use whatever Python environment was active for the user that started the server (whichever python came first on that user's PATH). If you're using a management tool like Gunicorn, you can specify which environment to use in its configs, although Gunicorn itself requires you to activate the virtual environment before invoking it.
EDIT:
Since you're using Gunicorn, take a look at this, https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-apps-using-gunicorn-http-server-behind-nginx
I've followed the official Heroku docs on Django and Static Assets; I've installed dj-static and added it to my requirements.txt file, and properly configured all the variables in my settings.py file:
STATIC_ROOT = os.path.join(CONFIG_ROOT, 'served/static/')
STATIC_URL = '/static/'
STATICFILES_DIRS = (
    os.path.join(CONFIG_ROOT, 'static'),
)
And this is what my wsgi.py looks like:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_django_project.settings")
from django.core.wsgi import get_wsgi_application
from dj_static import Cling
application = Cling(get_wsgi_application())
The contents of Procfile:
web: gunicorn --bind 0.0.0.0:$PORT my_django_project.wsgi:application
In the docs, it says that "collectstatic is run automatically when it is configured properly." But when I navigate to my site, there's clearly no CSS.
I've tried debugging using heroku run, but that just copies the static files as expected.
I've noticed that when I include the collectstatic command in my Procfile, i.e.
web: python my_django_project/manage.py collectstatic --noinput ; gunicorn -b 0.0.0.0:$PORT my_django_project.wsgi:application
...that works as expected, and the static files are served.
However, what's strange about that is when I run heroku run bash and look inside the directory that STATIC_ROOT points to, there's nothing there! In fact, the entire served/ directory is missing, and yet the static files are still being served!
I'd still like to know why collectstatic isn't being run automatically, as mentioned in the docs, when I deploy my Django app to Heroku.
It looks like you might be using a specific settings module for Heroku/production. Further, you've set the environment variable DJANGO_SETTINGS_MODULE to point to this settings module (and that way, when the app runs, Django knows to use that one and not, say, your default/development one). Finally, you've probably configured static asset settings in Heroku/production settings module (perhaps, STATIC_ROOT).
Okay, so if this is all correct, then here is the issue: Heroku environment variables are only set at serve-time, not at compile-time. This is important because collectstatic is a compile-time operation for Heroku. (Heroku goes through two stages when you push: 1) compiling, which involves setting the application up (collectstatic, syncdb, etc.), and 2) serving, the normal operation of your application.)
So, essentially, you've done everything correctly, but Heroku hasn't exposed your environment variables, including your specification of a different settings module, to collectstatic.
To have your environment variables set at compile-time, enable Heroku's user-env-compile labs feature like this:
heroku labs:enable user-env-compile
I think this is a silly thing to do by default, and would be interested in hearing why Heroku thought it was a good idea.
Have you tried adding the user-env-compile setting to your Heroku config?
heroku labs:enable user-env-compile
With that enabled, collectstatic should be run automatically whenever you deploy to Heroku.
I am using the heroku python buildpack with dokku, and collectstatic was not being run because it had no execute permission. They fixed that in a recent commit (Dec 13, 2013), so it should work now.
Django-celery seems to use the current django project's settings.py file to read the configuration for celery, and, correct me if I am wrong, there is no way to override this behavior. The problem is that my celery configuration is in a different file outside of the django project, and I need to somehow tell django-celery to use that file instead. How do I do this?
Generally, django-celery is used for one project (but many applications). Of course, you can make a symlink to the settings.py from one project to another and run django-celery as:
./manage.py celeryd --settings=path.to.symlink
but in my opinion, a better decision would be to run celery as a daemon with common settings and use CELERY_IMPORTS to pull in tasks from any Django project
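A sketch of what such a shared config might look like (the file name, broker URL, and task module paths are made up for illustration):
# celeryconfig.py -- standalone config shared by the celery daemon, outside any single Django project
BROKER_URL = 'amqp://guest:guest@localhost:5672//'

# Pull in task modules from several Django projects
CELERY_IMPORTS = (
    'project_one.app.tasks',
    'project_two.app.tasks',
)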
I was working on deploying a website to Heroku. The site seems to recognize my static files if I run it with Django's built-in server, namely runserver. However, if I run it with gunicorn, the static files are not recognized. I was wondering if there's any special setting I need to tweak to magically make the recognition happen. Can anyone explain how these two commands differ specifically, or does it have something to do with WSGI stuff?
Thanks!!!
This is how I do it with runserver in the Procfile, which is quite neat:
web:python manage.py runserver
And here's what I do with gunicorn in the Procfile, which is a mess:
web: gunicorn some.dotted.path.to.mywsgi:application
UPDATE
Luckily, I worked around this problem by including the following lines in my urls.py. I know it's not a perfect solution, because in the real world you need to turn DEBUG off, but for now, in development, it's working well. Can anyone explain what these lines magically do?
if settings.DEBUG:
    urlpatterns += patterns('django.contrib.staticfiles.views',
        url(r'^static/(?P<path>.*)$', 'serve'),
    )
Serving static files on Heroku with Django is a bit tricky. Assuming you're using the 'staticfiles' app, you have to run 'collectstatic' to collect your static files after deploying. The problem with Heroku is that running 'collectstatic' in the shell will actually run in a new dyno, which disappears as soon as it's finished.
One potential solution is outlined here:
Basically, the idea is to combine a few commands in your Procfile so that 'collectstatic' is run during the dyno spinup process:
web: python my_django_app/manage.py collectstatic --noinput; bin/gunicorn_django --workers=4 --bind=0.0.0.0:$PORT my_django_app/settings.py
You also have to add the 'static' views to your urls.py (see https://docs.djangoproject.com/en/dev/howto/static-files/#serving-files-uploaded-by-a-user, but duplicate it for STATIC_URL and STATIC_ROOT), as sketched below. It's worth noting that the Django docs recommend against using this in production.
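Concretely, that amounts to something like the following in urls.py (a sketch using the old-style patterns() syntax to match the rest of this setup; the URL prefixes are assumed to match your STATIC_URL and MEDIA_URL):
# urls.py -- explicit static/media serving through Django; fine as a stopgap, not for heavy production use
from django.conf import settings
from django.conf.urls import patterns, url

urlpatterns += patterns('',
    url(r'^static/(?P<path>.*)$', 'django.views.static.serve',
        {'document_root': settings.STATIC_ROOT}),
    url(r'^media/(?P<path>.*)$', 'django.views.static.serve',
        {'document_root': settings.MEDIA_ROOT}),
)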
This solution isn't ideal though, since you are still using your gunicorn process to serve static files. IMHO the best approach to dealing with static files on Heroku is to host them on something like S3.