I'm trying to separate out Django's secret key and DB password into environment variables, as widely suggested, so I can use an identical code base between my local and production servers.
The problem I am running into is correctly setting and then reading the environment variables on the production server running Apache + mod_wsgi.
Variables set in my user profile aren't available because Apache isn't run as that user. Variables set in the virtual host file with SetEnv aren't available because the scope is somehow different.
I've read a couple of SO answers, which lead to this blog with a solution.
I can't figure out how to apply that solution to current versions of Django, which use a wsgi.py file that looks like:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
How can I apply that blog solution to the wsgi.py file, or is there a better place to store env-vars where Django can get at them?
If anyone else is frustrated by Graham's answer, here is a solution that actually works for the original question. I personally find setting environment variables from Apache to be extremely useful and practical, especially since I configure my own hosting environment and can do whatever I want.
wsgi.py (tested in Django 1.5.4)
import os

from django.core.handlers.wsgi import WSGIHandler

class WSGIEnvironment(WSGIHandler):
    def __call__(self, environ, start_response):
        # Copy the per-request value set via Apache's SetEnv into os.environ
        os.environ['SETTINGS_CONFIG'] = environ['SETTINGS_CONFIG']
        return super(WSGIEnvironment, self).__call__(environ, start_response)

application = WSGIEnvironment()
Of minor note, you lose the future-proof method of django.core.wsgi.get_wsgi_application, which currently only returns WSGIHandler(). If the WSGIHandler.__call__ method is ever updated and you update Django also, you may have to update the WSGIEnvironment class if the arguments change. I consider this a very small penalty to pay for the convenience.
FWIW, relying on environment variables for fine-grained configuration settings is in general not a good idea, because not all WSGI hosting environments or commercial PaaS offerings support the concept. Using environment variables for fine-grained settings can also effectively lock you into a specific PaaS offering, because you have embedded a lookup of a specifically named environment variable directly into your code, and the naming convention of that variable is specific to that hosting service. So although the use of environment variables is pushed by certain services, be careful about depending on them, as it reduces the portability of your WSGI application and makes it harder to move between deployment mechanisms.
That all said, the blog post you mention will not usually help, because it uses the nasty trick of setting the process environment variables on each request, based on the per-request WSGI environ values set with SetEnv in Apache. This can cause various issues in a multithreaded configuration if the values of the environment variables can differ based on URL context. For the case of Django, it isn't helpful because the Django settings module would normally be imported before any requests had been handled, which means the environment variables would not be available at the time they are required.
This whole area of deployment configuration is in dire need of a better way of doing things, but frankly it is mostly a lost cause because hosting services will not change things to accommodate a better WSGI deployment strategy. They have done their work, have their customers locked into the way they have done it already and aren't about to create work for themselves and change things even if a better way existed.
Anyway, 'all problems in computer science can be solved by another level of indirection' (http://en.wikipedia.org/wiki/Indirection), and that is what you can do here.
Don't have your application look up environment variables. Have it import a deployment-specific Python configuration module which provides an API for getting the configuration settings. This configuration module would implement different ways of getting the actual settings based on the deployment mechanism. In some cases it could grab the values from environment variables. For others, such as Apache/mod_wsgi, the values could live in that configuration module itself, or be read from a separate configuration file in ini, json or yaml format. In providing an API, it can also map names of configuration settings to the different names used by different PaaS offerings.
This configuration module doesn't need to be a part of your application code; it could be manually placed into a subdirectory of '/etc/' on the target system. You then just need to set the Python module search path so your application can see it. The whole system could be made quite elegant as part of a wider, better standard for WSGI deployment, but as I said, there is little incentive to do the hard work of creating such a thing when existing PaaS offerings are highly unlikely to change to use such a standard.
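To make the idea concrete, here is a minimal sketch of such an indirection layer. The module name site_config, the SITE_CONFIG_FILE variable and the get_setting() helper are illustrative placeholders, not anything Django or mod_wsgi provides:
# site_config.py -- deployment-specific module, e.g. dropped into /etc/myapp/
# and added to the Python module search path. Sketch only.
import json
import os

_CONFIG_FILE = os.environ.get('SITE_CONFIG_FILE', '/etc/myapp/config.json')

try:
    with open(_CONFIG_FILE) as f:
        _file_settings = json.load(f)
except IOError:
    _file_settings = {}

def get_setting(name, default=None):
    """Look up a setting from the config file, falling back to the environment."""
    if name in _file_settings:
        return _file_settings[name]
    return os.environ.get(name, default)
Your Django settings.py would then call site_config.get_setting('SECRET_KEY') and stay ignorant of whether the value came from a file or from the process environment.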
Here's an alternative solution that's as future-proof as get_wsgi_application. It even lets you set environment variables to use in your Django initialization.
# in wsgi.py
KEYS_TO_LOAD = [
    # A list of the keys you'd like to load from the WSGI environ
    # into os.environ
]

def loading_app(wsgi_environ, start_response):
    global real_app
    import os
    for key in KEYS_TO_LOAD:
        try:
            os.environ[key] = wsgi_environ[key]
        except KeyError:
            # The WSGI environ doesn't have the key
            pass
    from django.core.wsgi import get_wsgi_application
    real_app = get_wsgi_application()
    return real_app(wsgi_environ, start_response)

real_app = loading_app
application = lambda env, start: real_app(env, start)
I'm not 100% clear how mod_wsgi manages its processes, but I assume it doesn't re-load the WSGI app very often. If so, the performance penalty from initializing Django will only happen once, inside the first request.
Alternatively, if you don't need to set the environment variables before initializing Django, you can use the following :
# in wsgi.py
KEYS_TO_LOAD = [
    # A list of the keys you'd like to load from the WSGI environ
    # into os.environ
]

from django.core.wsgi import get_wsgi_application
django_app = get_wsgi_application()

def loading_app(wsgi_environ, start_response):
    global real_app
    import os
    for key in KEYS_TO_LOAD:
        try:
            os.environ[key] = wsgi_environ[key]
        except KeyError:
            # The WSGI environ doesn't have the key
            pass
    real_app = django_app
    return real_app(wsgi_environ, start_response)

real_app = loading_app
application = lambda env, start: real_app(env, start)
For Django 1.11:
Apache config:
<VirtualHost *:80>
    ...
    SetEnv VAR_NAME VAR_VALUE
</VirtualHost>
wsgi.py:
import os

import django
from django.core.handlers.wsgi import WSGIHandler

class WSGIEnvironment(WSGIHandler):
    def __call__(self, environ, start_response):
        os.environ["VAR_NAME"] = environ.get("VAR_NAME", "")
        return super(WSGIEnvironment, self).__call__(environ, start_response)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
django.setup(set_prefix=False)
application = WSGIEnvironment()
I ran into the same problem of separating production and development code while keeping both under version control. I solved it by using different wsgi scripts, one for the production server and one for the development server. Create two different settings files as mentioned here, and reference them in the wsgi scripts. For example, the following is the wsgi_production.py file:
...
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.production")
...
and this is the wsgi_development.py file:
...
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings.development")
...
I am creating a package that itself uses Django, and I will be using it within other Django applications. The main issue I am facing is that I need to use Django settings for various reasons, such as logging and other extensive requirements. Since this package does not have any views/urls, we are writing tests and using pytest to run them. The tests will not run without the settings configured. So initially I put the following snippet in the __init__ file of the root app.
import os
import django
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_package.settings")
django.setup()
Now the tests ran properly and the package worked as a standalone app. But the moment I installed it in the main project, it overrode the environment variable with its own settings, and you can imagine the kind of havoc that would ensue.
This is the first time I am packaging a Django app, so I am not well-versed with best practices, and the docs are a little convoluted. I read the structure and code of various packages that use settings in their package, but I am still not able to understand how to ensure that the package accesses the intended settings while the project's settings are not affected at the same time.
While going through the docs, I came across this alternative to setting DJANGO_SETTINGS_MODULE:
from django.conf import settings
settings.configure(DEBUG=True)
As shown here: https://docs.djangoproject.com/en/2.2/topics/settings/#using-settings-without-setting-django-settings-module
But where exactly am I supposed to add this? In every file where the settings are imported, or will it work in the __init__? (I tried the latter, but something isn't right; it complains that "Apps aren't loaded".)
I also tried the following, where I imported my settings module and passed it to configure() as the default settings, and called django.setup() as well, but that didn't do the trick either:
# my_package/__init__.py
from django.conf import settings
from my_package import settings as default

if not settings.configured:
    settings.configure(default_settings=default, DEBUG=True)

import django
django.setup()
Also, I need settings mainly because I have a few parameters that can be overridden by the project that is using the package. When the package is installed, the overridden values are what I should be able to access in the package at runtime.
If someone can offer guidance on how to tackle this, or has a better process for creating packages that need Django settings, please do share.
So I ended up finding a way to work without setting the settings module as an environment variable. This lets me use the intended settings by importing the overridden settings as well as the defaults.
Create an apps.py file for configuring your package as an app.
# my_package/apps.py
from django.apps import AppConfig

class MyPackageConfig(AppConfig):
    name = 'my_package'
    verbose_name = 'My package'
And in your package's root, the following snippet in your __init__.py will configure only the overridden settings:
# my_package/__init__.py
import django
from django.conf import settings

from my_package import settings as overridden_settings

default_app_config = 'my_package.apps.MyPackageConfig'

if not settings.configured:
    # Get the list of attributes the module has
    attributes = dir(overridden_settings)
    conf = {}
    for attribute in attributes:
        # If the attribute is upper-cased, i.e. a settings variable, copy it into conf
        if attribute.isupper():
            conf[attribute] = getattr(overridden_settings, attribute)
    # Configure settings using the collected values
    settings.configure(**conf)
    # This is needed since it is a standalone django package
    django.setup()
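For illustration only, usage from a standalone test run might look like the sketch below (the test module name is hypothetical); when the package is installed in a host project that already has DJANGO_SETTINGS_MODULE set, settings.configured is already True and the block above is skipped:
# test_sanity.py -- hypothetical pytest module inside my_package
import my_package  # importing the package runs the __init__ logic above

from django.conf import settings

def test_settings_are_configured():
    # Standalone: configured by my_package/__init__.py.
    # Inside a host project: configured by the project's own settings module.
    assert settings.configured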
Reference for what django.setup() will do:
https://docs.djangoproject.com/en/2.2/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage
Points to keep in mind:
Since it is in the __init__, this makes sure that if you import anything from the package, the settings are configured.
As mentioned in the documentation above, you have to make sure that settings are configured only once and, similarly, that the setup method is called only once, or it will raise an exception.
Let me know if this helps or you are able to come up with a better solution to this.
Deploying my Django website, which runs fine locally, to PythonAnywhere with S3 as storage gives a strange error I can't google a solution for:
"TypeError: a bytes-like object is required, not 'str'"
What am I doing wrong?
I've tried moving my environment variables (AWS keys, SECRET_KEY, etc.) out of settings.env and setting them directly in my settings.py, plus every suggestion I could find, but it's still the same :(
Here's my /var/www/username_pythonanywhere_com_wsgi.py:
# +++++++++++ DJANGO +++++++++++
# To use your own Django app use code like this:
import os
import sys
from dotenv import load_dotenv
project_folder = os.path.expanduser('~/portfolio_pa/WEB') # adjust as appropriate
load_dotenv(os.path.join(project_folder, 'settings.env'))
# assuming your Django settings file is at '/home/myusername/mysite/mysite/settings.py'
path = '/home/corebots/portfolio_pa'
if path not in sys.path:
    sys.path.insert(0, path)
os.environ['DJANGO_SETTINGS_MODULE'] = 'WEB.settings'
## Uncomment the lines below depending on your Django version
###### then, for Django >=1.5:
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
###### or, for older Django <=1.4
#import django.core.handlers.wsgi
#application = django.core.handlers.wsgi.WSGIHandler()
I'd expect the website to run fine just like it does locally.
The boto library doesn't have good Python 3 support. This particular issue is known in the boto bug tracker: https://github.com/boto/boto/issues/3837
The best way of fixing this is to use boto3, which has decent Python 3 support and is generally the most supported AWS SDK for Python.
The reason it works on your local machine and doesn't work in production is that the PythonAnywhere setup seems to be using a proxy, which triggers this incompatible boto code. See the actual calling code: https://github.com/boto/boto/blob/master/boto/connection.py#L747
Your error traceback confirms this.
Unfortunately, I'm not familiar with django-photologue, but a brief look doesn't suggest that it strongly depends on boto3. Maybe I'm wrong.
I still think that the best way is to go with boto3. As a backup strategy, you can fork boto with a fix for this issue and install that instead of the official one from PyPI: https://github.com/boto/boto/pull/3699
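If you do switch, a rough sketch of what the storage settings might look like with django-storages and its boto3 backend is below; the bucket name is a placeholder and this assumes both django-storages and boto3 are installed:
# settings.py -- sketch only, assuming django-storages + boto3
import os

INSTALLED_APPS = [
    # ...
    'storages',
]

# Credentials still come from the environment (or settings.env via load_dotenv)
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = 'my-bucket-name'  # placeholder

# Route default file storage through the boto3-based backend
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'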
I'm writing my first Flask application. The application itself runs fine. I just have a newbie question about logging in production mode.
The basic structure:
app/
app/templates/
app/static
config.py
flask/... <- virtual env with flask + extensions
run.py
The application is started by the run.py script:
#!flask/bin/python
import os.path
import sys

appdir = os.path.dirname(os.path.abspath(__file__))
if appdir not in sys.path:
    sys.path.insert(1, appdir)

from app import app as application

if __name__ == '__main__':
    application.run(debug=True)
and is started either directly or from an Apache 2.4 web server. I have these lines in the Apache config:
WSGIPythonHome /usr/local/opt/app1/flask
WSGIScriptAlias /app1 /usr/local/opt/app1/run.py
In the former case, the debug=True is all I need for the development.
I'd like to have some logging also for the latter case, i.e. when running under Apache on a production server. Following is a recommendation from the Flask docs:
if not app.debug:
    import logging
    from themodule import TheHandlerYouWant
    file_handler = TheHandlerYouWant(...)
    file_handler.setLevel(logging.WARNING)
    app.logger.addHandler(file_handler)
It needs some customization, but that's what I want: instructions for the case when the app.debug flag is not set. A similar recommendation was also given here:
How do I write Flask's excellent debug log message to a file in production?
Please help: where do I have to put this code?
UPDATE: based on the comments by davidism and the first answer I got, I think the app in its current simple form is not suitable for what I was asking for. I will modify it to use different sets of configuration data, as recommended here: http://flask.pocoo.org/docs/0.10/config/#development-production . If my application were larger, I would follow pech0rin's answer.
UPDATE2: I think the key here is that the environment variables should control how the application is to be configured.
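A minimal sketch of that idea is below; the config class names and the APP_CONFIG environment variable are placeholders chosen for illustration, not part of the original setup:
# Sketch: pick the configuration from an environment variable
import os

from flask import Flask

class Config:
    DEBUG = False

class DevelopmentConfig(Config):
    DEBUG = True

class ProductionConfig(Config):
    DEBUG = False

app = Flask(__name__)

# APP_CONFIG would be set in the shell for development and left unset
# (or set to 'production') in the Apache environment.
config_name = os.environ.get('APP_CONFIG', 'production')
app.config.from_object(DevelopmentConfig if config_name == 'development' else ProductionConfig)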
I have had a lot of success with setting up my logging configuration inside a create_app function. This uses the application factory pattern. It allows you to pass in some arguments or a configuration class, and the application is then created specifically from your parameters.
This lets you initialize the application, set up logging, and do whatever else you want to do, before the application is sent back to be run.
For example:
def create_app(dev=False):
    app = Flask(__name__)
    if dev:
        app.config['DEBUG'] = True
    else:
        ...
        app.logger.addHandler(file_handler)
    return app
This has worked very well for me in production environments. YMMV
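For completeness, a sketch of how the entry points could use such a factory; the myapp module name is a placeholder for wherever create_app lives:
# run.py -- development entry point (sketch; "myapp" is a placeholder module)
from myapp import create_app

application = create_app(dev=True)

if __name__ == '__main__':
    application.run()
The file referenced by WSGIScriptAlias would instead call create_app(dev=False), so the production logging handler gets attached.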
I have been running into this problem for a short while now and simply can't find a solution anywhere. I am using Google App Engine to run a default Python 2.7 app with Django 1.5 (via GAE SDK) created through PyCharm. I can upload the app successfully, but upon visiting the actual page, I get a Server Error. Then, checking the logs in Google App Engine, I see this:
ImportError: <module 'main' from '/base/data/home/apps/s~eloquent-ratio-109701/1.388053784931450315/main.pyc'> has no attribute application
After searching the internet for a while, I was able to find a few posts which address this issue, but attempting their fixes never seemed to solve my problem. For example, one such problem was solved by replacing "application" with "app" in the following lines:
application = django.core.handlers.wsgi.WSGIHandler()
util.run_wsgi_app(application)
In fact, I had run into this same issue before, and this solution provided a fix for me in the past; however, at that time I was running a separate app and it was not through GAE.
I checked the Django documentation for version 1.5 here, but the code and suggestions there don't seem to conflict with what I currently have in my project.
I read a bit more about this type of problem, and saw another post that suggested checking the app's wsgi.py file to ensure that it is named 'application' or 'app' respectively, so that one could then use that same name throughout the rest of the application. However, upon checking those settings I saw that 'application' was used there too:
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
There's even a line in settings.py which uses the same nomenclature to declare the WSGI application:
WSGI_APPLICATION = 'Chimera.wsgi.application'
I'm really having trouble debugging this. I get the feeling it's really dumb and I just can't see it, but unfortunately I'm not particularly good at this kind of stuff -- I'm still a bit of a novice in this field.
Does anyone have any idea what I could try in an attempt to fix this issue?
UPDATE: I started making line by line changes and testing things, and eventually I found that the GAE log changes depending on the input for the "script" under app.yaml. So if I change the script under "handlers" between "main.app" and "main.application", it adjusts the log output to refer to "app" or "application" respectively. So that line in the app.yaml file tells the app what to look for, but I'm still not seeing why it can't be found. Not sure what else I could change to test it out. I wish I knew a bit more about the actual inner workings so that I could figure out why the app is confused about the attribute. Is it trying to run before it even gets instantiated or something?
Source code below:
main.py
import os, sys

os.environ['DJANGO_SETTINGS_MODULE'] = 'Chimera.settings'

from google.appengine.ext.webapp import util
from django.conf import settings
settings._target = None

import django.core.handlers.wsgi
import django.core.signals
import django.db
import django.dispatch.dispatcher

def main():
    application = django.core.handlers.wsgi.WSGIHandler()
    util.run_wsgi_app(application)

if __name__ == '__main__':
    main()
app.yaml
application: eloquent-ratio-109701
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.application

libraries:
- name: django
  version: 1.5
wsgi.py
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Chimera.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Full log from GAE:
Traceback (most recent call last):
  File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle
    handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
  File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 302, in _LoadHandler
    raise err
ImportError: <module 'main' from '/base/data/home/apps/s~eloquent-ratio-109701/1.388053784931450315/main.pyc'> has no attribute application
Thanks for helping me out.
In your main.py file (i.e. the main module) application is a variable inside the main() function, not an attribute of the main module. Basically you don't need a main() function.
GAE has some specific support for using Django, I'd strongly suggest going through the Django Support documentation and the Django App example.
Based on the comment made by #DanielRoseman, I discovered that declaring the app inside the main() function caused the issue: the application attribute was then only accessible at the main() function level, as it was a local variable of main() rather than a module-level global. Although the default application files were structured this way by PyCharm, it seems that this was incorrect. I'm not sure if it is a compatibility issue, but regardless, moving the app declaration outside of the main() function adjusts the scope in a way that allows other parts of the project to access it, solving my problem.
Thank you #DanielRoseman for the comment.
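For reference, a minimal sketch of what the corrected main.py could look like (keeping script: main.application in app.yaml); this is just the existing code with the handler hoisted to module level:
# main.py -- sketch: declare the WSGI handler at module level
import os

os.environ['DJANGO_SETTINGS_MODULE'] = 'Chimera.settings'

import django.core.handlers.wsgi

# Module-level attribute, so app.yaml's "script: main.application" can find it
application = django.core.handlers.wsgi.WSGIHandler()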
I want to read environment variables from Apache vhost config file and store them into Django settings.
Before updating to Django 1.7 everything was fine but now it is broken.
It seems the problem is in my wsgi.py script, when I call
_application = get_wsgi_application()
because it reads config file before the environment variable is set.
Is there another way to do this in Django 1.7?
In my /etc/apache2/sites-enabled/mysyte.conf I have:
<VirtualHost *:80>
...
SetEnv SECRET_KEY ...
SetEnv EMAIL_HOST ...
SetEnv EMAIL_HOST_PASSWORD ...
SetEnv EMAIL_HOST_USER ...
SetEnv EMAIL_PORT 25
...
In my wsgi.py:
import os
from os.path import abspath, dirname
from sys import path

from django.core.wsgi import get_wsgi_application

SITE_ROOT = dirname(dirname(abspath(__file__)))
path.append(SITE_ROOT)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "locacle.settings.production")

_application = get_wsgi_application()

def application(environ, start_response):
    for key, value in environ.items():
        if isinstance(value, str):
            os.environ[key] = value
    return _application(environ, start_response)
In my settings.py I have:
from os import environ

from django.core.exceptions import ImproperlyConfigured

from base import *

def get_env_setting(setting):
    """ Get the environment setting or raise an exception """
    try:
        return environ[setting]
    except KeyError:
        error_msg = "Set the %s env variable" % setting
        raise ImproperlyConfigured(error_msg)

EMAIL_HOST = get_env_setting('EMAIL_HOST')
...
This is what the log file reports:
...
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/home/www/.../settings/production.py", line 34, in <module>
    EMAIL_HOST = get_env_setting('EMAIL_HOST')
  File "/home/www/...settings/production.py", line 21, in get_env_setting
    raise ImproperlyConfigured(error_msg)
ImproperlyConfigured: Set the EMAIL_HOST env variable
...
I'm afraid that what you are trying to do here is inherently fragile, only worked by lucky accident on previous versions of Django, and will not work with Django 1.7 or any future version of Django. (Update: it also would make you potentially vulnerable to the "shellshock" bash bug, whereas WSGI normally isn't.)
The basic problem is that the WSGI environ is only available on a per-request basis, but you are trying to set global configuration for your Django process based on it. This is inefficient and conceptually broken (why are you re-setting OS environment variables again and again every single time a request comes in? what if different requests have a different WSGI environ?), and it can only work at all if Django waits to configure itself until the first request arrives.
But the unpredictable timing and ordering of Django's startup sequence in previous versions caused problems. For instance, when using runserver in local development Django would eagerly configure itself, due to validation checks, but under a production environment it would only configure itself lazily (which you were relying on), meaning that sometimes imports would happen in a different order and circular imports would crop up in production that weren't reproducible under runserver.
Django 1.7 includes a revamped startup sequence in order to address these problems, make the startup sequence predictable and consistent between development and production, and allow users to explicitly register code to run at startup time (via AppConfig.ready()). A side effect of this is that settings are configured when your process starts up (specifically, with the call to django.setup() in get_wsgi_application()), not waiting until the first request comes in.
If you're only running one Django site on this server, I would simply move your configuration into normal environment variables rather than SetEnv in your Apache config, and avoid the whole problem.
If you're running multiple Django sites which require different config through a single Apache server, that won't work. In that case, perhaps someone more familiar with Apache and mod_wsgi can give you advice on how to pass environment variables from your Apache config to your Django process in a reliable way; I've personally found the process model (actually, process models, since there's more than one depending on how you configure it) of mod_wsgi confusing and error-prone when trying to run multiple independently-configured sites through one server. I find it simpler to use a dedicated WSGI server like gunicorn or uwsgi and have Apache (or nginx) proxy to it. Then it's simple to run multiple gunicorn/uwsgi processes with separate OS environments.