I am writing an automation script to initialize Django projects based on a list of questions the user answers from the console.
The answers will be put into environment variables; for example, SERVER_ENVIRONMENT is either 'development' or 'production'.
The problem I am facing is that environment variables set via os.environ['var'] don't stick, which means the next time I run the project those variables are gone, so I need to persist them somewhere on disk inside the project.
What's the best strategy to do so? Ideally it should be automatic and work both with the runserver command and with uwsgi processes.
Can you instead generate a file and import it from settings.py?
e.g. in settings.py put
import env_config
where env_config.py will be generated by your script, will not be version-controlled, and will contain all the os.environ assignments (or just plain global variables?)
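Something along these lines could work; a minimal sketch, assuming the generator writes the file next to settings.py and that directory is importable (otherwise use a relative import). Only SERVER_ENVIRONMENT comes from the question, everything else is illustrative:

# env_config.py -- written by the setup script, excluded from version control
import os

# Answers collected from the console, persisted so that both runserver and
# uwsgi see them as soon as settings.py imports this module.
os.environ.setdefault('SERVER_ENVIRONMENT', 'development')

Then in settings.py:

import os
import env_config  # must run before any setting that reads os.environ

SERVER_ENVIRONMENT = os.environ['SERVER_ENVIRONMENT']
DEBUG = SERVER_ENVIRONMENT == 'development'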
Related
What is the expected process when one wants to deploy a Django website automatically by means of a continuous integration process: how can we set debug mode to False without editing the configuration file?
Generally there are three very common approaches to switch between production and development environments in Django applications:
Create a separate settings file with a different name and point to it using the DJANGO_SETTINGS_MODULE environment variable. Once a value is set for it, the default settings file inside the project folder will be ignored.
Use Python conditional statements to check for variables specific to your environment, like if settings.DEBUG:, and decide on other settings within that block of code.
Make a settings directory inside the project folder and create three additional files: one for common settings variables, like common.py, and two more for local and production-specific variables, such as dev.py and prod.py. You can write the package's __init__.py to always import common.py and then try to import one of the others if it is found (see the sketch after this answer).
Generally, you don't change anything in your code repository when deploying to production. It's the same code as on your local machine. The only difference is that your app server (gunicorn or uwsgi) is running with a different DJANGO_SETTINGS_MODULE environment variable.
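A minimal sketch of the third approach (the file names common.py, dev.py and prod.py are the ones suggested above; the fallback logic is one reasonable choice):

# settings/__init__.py
from .common import *       # settings shared by every environment

try:
    from .dev import *       # present only on development machines
except ImportError:
    from .prod import *      # otherwise assume production

With the first approach no such package is needed: the app server (gunicorn or uwsgi) is simply started with DJANGO_SETTINGS_MODULE pointing at the production module, exactly as described above.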
My Django app communicates with an external server, and before running the Django server I would like to load a config file. Variables from this file are going to be used by some module while the app is running.
The problem is that the config file can be located in many places.
My dream would be to run manage.py --cfg "/path/to/cfg/file.cfg" or
manage.py runserver --cfg "/path/to/cfg/file.cfg"
and have some variables (globals?) loaded and available for Django modules to use. After the Django server shuts down, those variables can disappear.
Is there some nice way to accomplish this?
There seem to be two parts to your problem:
How do I support changing which set of variables (as defined in a config file) is used for a given run?
How can I load these variables such that they are visible to all the modules of my application?
The standard mechanism for doing the 2nd is to put stuff in settings.py.
If you do FOO="bar" in settings.py, in your module you can do:
from django.conf import settings
if settings.FOO == "bar":
    # Do something
As for supporting multiple configurations, the first thing I could come up with is to rename your real settings.py to real_settings.py and then create a series of config1_settings.py, config2_settings.py, config3_settings.py, ... which look like:
from real_settings import *
from configX import *  # configX.py must be importable (i.e. on sys.path)
where configX.py has all the values for whatever variables you want for configuration X.
You would then start Django's built-in server via:
manage.py runserver --settings=configX_settings
Note that doing this for a production server (where you can't as easily just pass something on the command line to kick it off) may be a bit trickier, but you're going to need to provide more use case details for that.
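For a production server the selection usually happens in the WSGI entry point rather than on a command line; a hedged sketch, assuming a project package named myproject and the configX_settings naming from above:

# myproject/wsgi.py -- sketch; names are illustrative
import os

# setdefault only fills the variable in if the process environment (uwsgi,
# gunicorn, an init script, ...) has not already chosen a settings module.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config1_settings')

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()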
Until now I was making changes directly on my Django production server (yes, really really bad :p). I want to move to a git workflow and create a local test server before deployment. So I downloaded my Python files and ran:
python manage.py runserver
hoped and prayed... but it was not enough; I got a nice error:
django.core.exceptions.ImproperlyConfigured: WSGI application 'issc.issc.wsgi.application' could not be loaded; Error importing module: 'No module named issc.wsgi'
I read in the documentation that manage.py is created automatically and sets up several key parts:
In addition, manage.py is automatically created in each Django project. manage.py is a thin wrapper around django-admin.py that takes care of several things for you before delegating to django-admin.py:
It puts your project’s package on sys.path.
It sets the DJANGO_SETTINGS_MODULE environment variable so that it points to your project’s settings.py file.
It calls django.setup() to initialize various internals of Django.
My question is: how can I manually set up these variables? In my case I downloaded all the files into an arbitrary directory, but that was not enough. Everything is here, but the link between everything is missing...
If you want to manually set which settings module is used, you can set DJANGO_SETTINGS_MODULE with the following:
export DJANGO_SETTINGS_MODULE='settings_to_load'
You can set all your environment variables this way and it should work. I recommend using a virtualenv for this, at least.
I'm setting up a Django project with different settings files for local and production. I can confirm that my Django secret key is successfully set in an environment variable in the virtualenv, and when I do runserver I get no error. However, when I try manage.py syncdb I get:
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
I don't understand why I can successfully browse to the site after runserver but can't sync the database. When I run env I can see that the secret key is there, and in my base settings file (imported into the local settings) I am doing this:
SECRET_KEY = os.environ.get('MY_SECRET_KEY')
Any help debugging this would be greatly appreciated.
Euan
I'm not sure why the runserver command is working while syncdb isn't, but you can sort it out by adding an environment variable for DJANGO_SETTINGS_MODULE in the same way you did for the SECRET_KEY. The only difference is that you don't need to reference DJANGO_SETTINGS_MODULE anywhere in the Django code. I'm running my own setup exactly that way, and the only problem I run into is forgetting to change the settings module when I switch between projects :-)
EDIT: I didn't realise that you were adding --settings=myapp.settings.local to runserver as well as syncdb. The reason you need to do this is that your settings live on a different path from the default, so Python can't find them. Also, although you set DJANGO_SETTINGS_MODULE in the wsgi file, that file is only run when the site is accessed via your web server. When running a manage command the wsgi file is ignored (AFAIK), so adding DJANGO_SETTINGS_MODULE to your environment variables in the same way as SECRET_KEY makes your settings file available to the manage command.
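If it helps while debugging, the lookup in the base settings file can also be made to fail with an explicit message whenever the variable is not visible to the current process; a small sketch using the MY_SECRET_KEY name from the question:

import os
from django.core.exceptions import ImproperlyConfigured

SECRET_KEY = os.environ.get('MY_SECRET_KEY')
if not SECRET_KEY:
    # Fails at settings import time, so a manage.py command run in an
    # environment without the variable reports the real cause directly.
    raise ImproperlyConfigured('MY_SECRET_KEY is not set for this process')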
Hope that helps
Somewhat similar situation here: using a virtualenv and running an external script that uses Django models.
To make this work, please make sure:
Your sys.path list has a path to your virtualenv site-packages. For me it's: sys.path.append('/home/user/.virtualenvs/Project/local/lib/python2.7/site-packages')
Your django settings variable is added to os.environ. Eg: os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Project.settings")
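Putting both points together, the top of such an outer script might look like this (the path and project name come from the example above; the app and model names are hypothetical, and django.setup() requires Django 1.7+):

# outer_script.py -- run from the project root so Project.settings is importable
import os
import sys

# 1. Make the virtualenv's packages importable.
sys.path.append('/home/user/.virtualenvs/Project/local/lib/python2.7/site-packages')

# 2. Point Django at the right settings module.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Project.settings")

# 3. Initialise Django before importing any models.
import django
django.setup()

from myapp.models import MyModel  # hypothetical app and model
print(MyModel.objects.count())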
Trying to use two different settings files for production and dev.
I set DJANGO_SETTINGS_MODULE='mysite.settings_production'.
It works perfectly when running the server with runserver.
When I run it with Apache, though, Apache doesn't seem to use the setting from ~/.bash_profile and instead uses os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings") from the wsgi.py file.
I guess that's because Apache is running as a different user, not mine...
OK, so it seems it's running as www-data on my EC2 Ubuntu instance.
So I have to create /home/www-data/.bash_profile and set the env variable there?
It seems like so much hassle to use a different settings file.
Is there an obviously easier way to do this?
(I don't want to change the wsgi.py file, because it's source controlled)
Using bash_profile is completely the wrong way to do this.
The correct way is to use the wsgi.py file. However, since you don't want to do this (although I don't understand what it being version-controlled has to do with anything) then you can set environment variables directly in your Apache configuration using SetEnv:
SetEnv DJANGO_SETTINGS_MODULE mysite.settings
Well, that's really the wrong way. A common method for having separate settings per environment is to store the environment-dependent values in local_settings.py (or whatever you name it) and import it from settings.py:
from local_settings import *
Don't put local_settings.py in the project repository, as you would overwrite it with each commit. If you want to keep a sample of local settings, put it into a separate file, e.g. local_settings.py.example.
You can import the local settings at the beginning of settings.py (so the values in settings.py override the local ones), or at the end, or keep two local settings files to cover both cases.
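In code, the end-of-file variant typically looks like this (a sketch; move the block to the top of settings.py if you want the defaults to win instead):

# at the end of settings.py: local values override the defaults above
try:
    from local_settings import *
except ImportError:
    # No local_settings.py on this machine (e.g. in production) -- that's fine.
    pass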