In most places I see:
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings.local')  # --> ??

app = Celery('mysite')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
What is the use of exporting the local settings here? I have seen this in many projects, and in production we are using local settings. Granted, local mostly inherits the base settings where all the Celery config is defined, but why not mysite.settings.production?
os.environ.setdefault will first check whether the DJANGO_SETTINGS_MODULE environment variable is already set; only if it is not found does it set it to the default value.
You don't want the hassle of setting the DJANGO_SETTINGS_MODULE environment variable on every development machine, but in production you will set this variable to the production config.
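A minimal sketch of that behaviour, using the module names from the question:

import os

# setdefault only assigns when the variable is not already set.
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings.production'
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings.local')
print(os.environ['DJANGO_SETTINGS_MODULE'])  # mysite.settings.production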
Related
I've set up a Django project using settings modules, a base.py, development.py, and production.py, but I'm a little confused about how to run the project locally and remotely.
So, I can run python manage.py runserver --settings=<project>.settings.development and everything goes fine, but my question is: how can I run it through heroku local and give it the settings option? It would also help to run it remotely with production settings.
You can make use of environment variables.
In __init__.py of your settings directory add:
import os

from .base import *

environment = os.environ.get("ENV")

if environment == "development":
    from .development import *
else:
    from .production import *
This will load the development settings module if the value of ENV is set to development; otherwise it will load the production settings module by default.
Configuring environment variables in Heroku
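For example, heroku local reads a .env file at the project root, while on the remote app you set the variable as a config var (the app name below is a placeholder):

# .env at the project root, picked up by `heroku local`
ENV=development

# on the remote app (app name is a placeholder)
$ heroku config:set ENV=production --app my-staging-app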
I want to make specific settings for each environment (local vs staging). I set up Config Vars in my Heroku staging app and set the DEBUG setting to false to try it out, but it didn't work. Am I missing something or doing it wrong?
My settings.py file
Config Vars in the staging app
Result when I tried something wrong
You should create a directory where your current settings.py file is located and name it settings. Then create base.py, dev.py, and prod.py files in this directory.
Also create an __init__.py in the same location as these three settings files, and inside that __init__.py put from your_project_name.settings.base import *. In base.py you'll have all the settings shared between prod and dev, and in prod.py and dev.py you would just from .base import * to 'inherit' the settings from base.py. This is one of the only cases where a star import is recommended.
Then you can set the DJANGO_SETTINGS_MODULE environment variable in production to use your_project_name.settings.prod instead of the default settings module.
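A sketch of the resulting layout, with your_project_name as a placeholder:

your_project_name/settings/__init__.py  # from your_project_name.settings.base import *
your_project_name/settings/base.py      # shared settings
your_project_name/settings/dev.py       # from .base import * plus dev overrides
your_project_name/settings/prod.py      # from .base import * plus prod overrides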
DEBUG in the settings file needs to be set via the environment variable, if available.
So change DEBUG = True to something like DEBUG = os.environ.get('DEBUG', 'True') == 'True' (environment variables are always strings, so compare against a string rather than using the raw value) and you should be fine. This is usually called a feature flag (pattern).
Responding:
If you are using a "two scoops" pattern, @wjh18 is on the right path.
The pattern I outlined is solid and has been in use for years.
Check what the Python shell sees on Heroku: run heroku run bash --app APPNAME, then python, then import os, then os.environ.get('DEBUG'). The result should match your settings on Heroku. If so, there may be something in the stack that is inhibiting the settings (lazy loading) from working correctly.
A number of gotchas exist in Django if you deviate from established patterns.
Just in case: the env var is ONLY for the Django settings module; everywhere else, access Django's DEBUG via a proper import of settings (from django.conf import settings).
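For example, outside the settings module you would read the flag through Django's lazy settings object:

from django.conf import settings

if settings.DEBUG:
    # development-only behaviour goes here
    ...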
My Django project has multiple settings files, for development, production and testing. And I am using supervisor to manage the celery worker. My question is how to load the right settings file for celery based on the environment I am in.
By using environment variables. Let's say you have the following settings files in your repository:
config/settings/development.py
config/settings/production.py
...
The recommended way to define your Celery instance is in a celery.py module inside your config package:
from __future__ import absolute_import, unicode_literals

import os

from celery import Celery

# Set the default Django settings module for the 'celery' program.
# os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.production')

app = Celery('proj')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Instead of setting the DJANGO_SETTINGS_MODULE variable within the module (I have commented that out), make sure it is present in the environment at the time supervisord is started.
To set those variables in your staging, testing, and production system you can execute the following bash command.
E.g. on your production system:
$ export DJANGO_SETTINGS_MODULE=config.settings.production
$ echo $DJANGO_SETTINGS_MODULE
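Since the worker is managed by supervisor, you can also set the variable in the program section of the supervisor config; a sketch, where the program name and paths are assumptions:

[program:celeryworker]
command=/path/to/venv/bin/celery -A config worker --loglevel=INFO
directory=/path/to/project
environment=DJANGO_SETTINGS_MODULE="config.settings.production"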
I would also suggest you load them from an .env file. In my opinion that's more convenient. You could do that with, for example, python-dotenv.
Update
The .env file is mostly unique to each of your systems and is usually not under source/version control. By unique I mean that for development you may have a more verbose LOG_LEVEL or a different SECRET_KEY, because you don't want those to show up in your source control, or you want to be able to adjust them without modifying your source files.
So, in your base.py (which production.py and development.py inherit from) you can load the variables from the file, for example:
import os

from dotenv import load_dotenv

load_dotenv()  # the .env file has to be in the same directory

# ...
DJANGO_SETTINGS_MODULE = os.getenv("DJANGO_SETTINGS_MODULE")
print(DJANGO_SETTINGS_MODULE)
# ...
I personally don't use the package since I use Docker, which has a declarative way of defining an .env file, but the code above should give you an idea of how it could work. There are similar packages out there, like django-environ, which is featured in the book Two Scoops of Django, so I would tend to use that instead of python-dotenv; a matter of taste.
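For completeness, a minimal django-environ sketch based on its documented usage (the default values here are assumptions):

import environ

env = environ.Env(DEBUG=(bool, False))  # cast DEBUG to bool, default False
environ.Env.read_env()                  # read a .env file if present

DEBUG = env('DEBUG')
SECRET_KEY = env('SECRET_KEY')          # raises if the variable is missing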
You likely want to configure different settings files. From here you have two options. You can use the django-admin --settings param at runtime:
django-admin runserver --settings=thecelery.settings
Also, you may have the option to configure settings in settings.py. If you currently have a single settings file, this would require you to set up additional settings files and set environment variables on the instance. Then in your base settings file you can do something like this:
import os

your_env = os.environ["environment"]

if your_env == "celery":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "thecelerysettings")
else:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "defaultsettings")
I'm having trouble understanding the switchover from local to production settings for deploying Django projects. I'm using an Ubuntu virtual machine (VM) if it matters.
I understand how to configure my settings. I understand best practices for creating settings files (base.py, local.py, production.py, blah, blah). I know that in local development DEBUG=True, in production DEBUG=False, blah, blah.
But how do I implement this switchover in deployment? Do I get rid of the local.py? Do I create some kind of logic so that my VM only reads base.py and production.py?
What's the best approach?
I'm not sure about the best approach but what I do works...
At the foot of my settings.py, I have:
try:
    from local_settings import *
except ImportError:
    pass
I keep all my local development settings in local_settings.py, which override any production settings. I also make sure not to upload my local_settings.py file!
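For illustration, such a local_settings.py might contain nothing more than a few overrides (the values here are made up):

# local_settings.py -- never committed to version control
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']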
What you could do is check in your settings.py which environment is being used at the moment.
To do this you can set an environment variable on your system that has different values in your development environment and the production environment.
You can set these environment variables by running
sudo -H gedit /etc/environment
and adding the following line to the file:
DEBUG="true"
(to make these changes available you will have to log out and log back in to your system)
In the production environment you would then set DEBUG="false".
Then you can do this in your settings.py:
DEBUG = os.environ.get('DEBUG', 'true') != 'false'
and then you can set every setting that differs between environments like this:
if DEBUG:
    STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
else:
    STATICFILES_STORAGE = 'custom_storages.StaticStorage'
(The setting above serves static files from the local server in the development environment, and from Amazon S3 with boto in the production environment, which is defined in the custom_storages module.)
This way you can push your updates and the right settings will always be picked up depending on the environment.
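The same pattern works for any environment-dependent setting, for example Django's standard email backends:

if DEBUG:
    EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
else:
    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'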
I'm trying to separate out Django's secret key and DB pass into environmental variables, as widely suggested, so I can use identical code bases between local/production servers.
The problem I am running into is correctly setting and then reading the environmental vars on the production server running Apache + mod_wsgi.
Vars set in my user profile aren't available because Apache isn't run as that user. Vars set in the Virtual Hosts file with SetEnv are not available because the scope is somehow different.
I've read a couple of SO answers (1, 2), which lead to this blog with a solution.
I can't figure out how to apply the solution to current versions of Django which use a wsgi.py file, which looks like:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
How can I apply that blog solution to the wsgi.py file, or is there a better place to store env-vars where Django can get at them?
If anyone else is frustrated by Graham's answer, here is a solution that actually works for the original question. I personally find setting environmental variables from Apache to be extremely useful and practical, especially since I configure my own hosting environment and can do whatever I want.
wsgi.py (tested in Django 1.5.4)
import os

from django.core.handlers.wsgi import WSGIHandler

class WSGIEnvironment(WSGIHandler):

    def __call__(self, environ, start_response):
        os.environ['SETTINGS_CONFIG'] = environ['SETTINGS_CONFIG']
        return super(WSGIEnvironment, self).__call__(environ, start_response)

application = WSGIEnvironment()
Of minor note, you lose the future-proof method django.core.wsgi.get_wsgi_application, which currently only returns WSGIHandler(). If the WSGIHandler.__call__ method is ever updated and you also update Django, you may have to update the WSGIEnvironment class if the arguments change. I consider this a very small penalty to pay for the convenience.
FWIW, relying on environment variables for fine-grained configuration settings is in general not a good idea, because not all WSGI hosting environments or commercial PaaS offerings support the concept. Using environment variables for fine-grained settings can also effectively lock you into a specific PaaS offering where you have directly embedded a lookup of a specifically named environment variable into your code, and the naming convention of that variable is specific to that hosting service. So although the use of environment variables is pushed by certain services, always be careful about being dependent on them, as it will reduce the portability of your WSGI application and make it harder to move between deployment mechanisms.
That all said, the blog post you mention will not usually help. This is because it is using the nasty trick of setting the process environment variables on each request based on the per request WSGI environ settings set using SetEnv in Apache. This can cause various issues in a multi threading configuration if the values of the environment variables can differ based on URL context. For the case of Django, it isn't helpful because the Django settings module would normally be imported before any requests had been handled, which means that the environment variables would not be available at the time required.
This whole area of deployment configuration is in dire need of a better way of doing things, but frankly it is mostly a lost cause because hosting services will not change things to accommodate a better WSGI deployment strategy. They have done their work, have their customers locked into the way they have done it already and aren't about to create work for themselves and change things even if a better way existed.
Anyway, 'all problems in computer science can be solved by another level of indirection'. (http://en.wikipedia.org/wiki/Indirection) and that is what you can do here.
Don't have your application look up environment variables. Have it import a deployment-specific Python configuration module which provides an API to get the configuration settings. This configuration module would implement different ways of getting the actual settings based on the deployment mechanism. In some cases it could grab the values from environment variables. For others, such as Apache/mod_wsgi, the values could live in that configuration module, or be read from a separate configuration file in ini, json or yaml format. In providing an API, it can also map names of configuration settings to the different names used by different PaaS offerings.
This configuration module doesn't need to be a part of your application code, but could be manually placed into a subdirectory of '/etc/' on the target system. You then just need to set the Python module search path so your application can see it. The whole system could be made quite elegant as part of a wider, better standard for WSGI deployment, but as I said, there is little incentive to do the hard work of creating such a thing when existing PaaS offerings are highly unlikely to change to use it.
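As a rough sketch of that indirection (the module name, file format, and function are invented for the example):

# site_config.py, e.g. dropped into a directory on the module search path
import json
import os

_settings = {}
_config_file = os.environ.get('SITE_CONFIG_FILE')  # optional JSON file
if _config_file and os.path.exists(_config_file):
    with open(_config_file) as f:
        _settings = json.load(f)

def get_setting(name, default=None):
    # Prefer the configuration file, fall back to environment variables.
    if name in _settings:
        return _settings[name]
    return os.environ.get(name, default)

Your Django settings file then calls site_config.get_setting('SECRET_KEY') and never touches os.environ directly.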
Here's an alternative solution that's as future-proof as get_wsgi_application. It even lets you set environment variables to use in your Django initialization.
# in wsgi.py
KEYS_TO_LOAD = [
# A list of the keys you'd like to load from the WSGI environ
# into os.environ
]
def loading_app(wsgi_environ, start_response):
global real_app
import os
for key in KEYS_TO_LOAD:
try:
os.environ[key] = wsgi_environ[key]
except KeyError:
# The WSGI environment doesn't have the key
pass
from django.core.wsgi import get_wsgi_application
real_app = get_wsgi_application()
return real_app(wsgi_environ, start_response)
real_app = loading_app
application = lambda env, start: real_app(env, start)
I'm not 100% clear how mod_wsgi manages its processes, but I assume it doesn't re-load the WSGI app very often. If so, the performance penalty from initializing Django will only happen once, inside the first request.
Alternatively, if you don't need to set the environment variables before initializing Django, you can use the following :
# in wsgi.py
KEYS_TO_LOAD = [
# A list of the keys you'd like to load from the WSGI environ
# into os.environ
]
from django.core.wsgi import get_wsgi_application
django_app = get_wsgi_application()
def loading_app(wsgi_environ, start_response):
global real_app
import os
for key in KEYS_TO_LOAD:
try:
os.environ[key] = wsgi_environ[key]
except KeyError:
# The WSGI environment doesn't have the key
pass
real_app = django_app
return real_app(wsgi_environ, start_response)
real_app = loading_app
application = lambda env, start: real_app(env, start)
For Django 1.11:
Apache config:
<VirtualHost *:80>
...
SetEnv VAR_NAME VAR_VALUE
</VirtualHost>
wsgi.py:
import os
import django
from django.core.handlers.wsgi import WSGIHandler
class WSGIEnvironment(WSGIHandler):
def __call__(self, environ, start_response):
os.environ["VAR_NAME"] = environ.get("VAR_NAME", "")
return super(WSGIEnvironment, self).__call__(environ, start_response)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
django.setup(set_prefix=False)
application = WSGIEnvironment()
I ran into the same problem of separating production and development code while tracking both of them in version control. I solved the problem by using different WSGI scripts, one for the production server and one for the development server. Create two different settings files as mentioned here, and reference them in the WSGI scripts. For example, the following is the wsgi_production.py file:
...
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.production")
...
and in the wsgi_development.py file:
...
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.development")
...
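Each Apache virtual host then points at its own script, for example (the paths are assumptions):

WSGIScriptAlias / /srv/project/project/wsgi_production.py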