I have a Django 2.2 project with all secrets kept in a .env file.
I'm using a dotenv library to load the .env file into the Django application in manage.py:
import os
import dotenv

def main():
    # Read environment variables from the .env file next to manage.py
    env_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), '.env')
    dotenv.read_dotenv(env_file)
    ....
The .env file is loaded correctly when running locally.
On the server, I'm running the application under Supervisor with the following configuration:
[supervisord]
[program:myapp]
command=/var/www/html/app/start_gunicorn.sh
directory=/var/www/html/app/
autostart=true
autorestart=true
stopasgroup=true
stopsignal=QUIT
logfile=/home/ubuntu/log/supervisor/supervisor.log
logfile_maxbytes=5MB
logfile_backups=10
loglevel=info
stderr_logfile=/home/ubuntu/log/supervisor/qcg-backend.err.log
stdout_logfile_maxbytes=5MB
stdout_logfile_backups=10
stdout_logfile=/home/ubuntu/log/supervisor/qcg-backend.out.log
stderr_logfile_maxbytes=5MB
stderr_logfile_backups=10
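(As an aside, Supervisor can also inject environment variables into the managed program directly via the environment= key of the [program] section; the values below are placeholders, not from the setup above:)

```ini
[program:myapp]
command=/var/www/html/app/start_gunicorn.sh
; Comma-separated KEY="value" pairs, inherited by the child process
environment=DEBUG="True",DJANGO_SETTINGS_MODULE="myproject.settings"
```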
But the environment variables are not loaded and are not available in Django.
Running the following commands from an SSH console works:
python manage.py shell
import os
os.environ.get('DEBUG')
> True
But when running the application itself, the environment variables are not accessible and are not applied.
manage.py is not invoked when running Django in production. The dotenv docs say that you should add the loader code to the top of wsgi.py as well.
I think putting it in settings.py is more convenient: then there's no need to add it to both manage.py and wsgi.py.
Related
For local development I simply create a .env file containing the necessary variables and run the app with:
flask run
Everything works fine; all the environment variables are read and set correctly in the app.
However when I run the app with gunicorn:
gunicorn api:app --bind 127.0.0.1:5050
I can clearly see that the env variables are not loaded.
They only work if I set them explicitly in the gunicorn command:
gunicorn api:app --bind 127.0.0.1:5057 -e POSTGRES_DB=postgresql://XXXXX
then it works.
However, since I can have many environment variables, this isn't really feasible. Is there a way to set them using a file?
Gunicorn can read a gunicorn.conf.py file (pass it explicitly with -c if it isn't in the working directory), which is just a normal Python file where you can set your environment variables:
# gunicorn.conf.py
import os
os.environ['POSTGRES_DB'] = "postgresql://XXXXX"
You can even tell it to load your .env files with something like:
# gunicorn.conf.py
import os
from dotenv import load_dotenv
for env_file in ('.env', '.flaskenv'):
    env = os.path.join(os.getcwd(), env_file)
    if os.path.exists(env):
        load_dotenv(env)
I've set up a Django project using settings modules: base.py, development.py, and production.py. But I'm a little confused about how to run the project locally and remotely.
I can run python manage.py runserver --settings=<project>.settings.development and everything goes fine, but my doubt is: how can I run it through heroku local with the settings option? That would also help me run it remotely with production settings.
You can make use of environment variables.
In the __init__.py of your settings directory, add:
import os

from .base import *

environment = os.environ.get("ENV")

if environment == "development":
    from .development import *
else:
    from .production import *

This will load the development settings module if ENV is set to development; otherwise, it will load the production settings module by default.
You can configure the environment variable in Heroku with heroku config:set, e.g. heroku config:set ENV=development.
Several settings files with different names exist in my project. How do I choose which settings file is used each time I run python manage.py runserver?
It's quite simple.
The main Django tutorial covers configuring Django settings.
A shortcut for the runserver command is the --settings= option, and this also works with uWSGI.
But if you intend to change settings without restarting the server, django-constance is the answer.
You can also add this to the manage.py file:

import os

def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project_name.settings_name')
After that, run:
py manage.py ----
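Note that os.environ.setdefault only fills the value in when the variable isn't already set, so anything exported in the shell (or by Supervisor/Heroku) still takes precedence over the default in manage.py:

```python
import os

# An explicitly exported value wins over setdefault...
os.environ['DJANGO_SETTINGS_MODULE'] = 'project_name.settings.production'
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project_name.settings.local')
print(os.environ['DJANGO_SETTINGS_MODULE'])  # project_name.settings.production

# ...while an unset variable falls back to the default
del os.environ['DJANGO_SETTINGS_MODULE']
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project_name.settings.local')
print(os.environ['DJANGO_SETTINGS_MODULE'])  # project_name.settings.local
```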
I am using the cookiecutter-django .env design to load different settings depending on the environment. Running locally should use the "local.py" settings, and running in AWS Elastic Beanstalk should load "dev.py". Both import from "common.py".
Running the server in AWS with dev settings works, but collectstatic fails because it tries to import the local settings instead of the dev settings.
How can the EC2 instance run collectstatic with the appropriate dev.py settings?
OK, found it. The manage.py file looked like this:

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.local')

forcing all commands to run with local settings instead of loading from the .env file.
I have changed it to:
import os

import environ

ROOT_DIR = environ.Path(__file__) - 1
env = environ.Env()
env.read_env(ROOT_DIR.file('config/settings/.env'))

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', env('DJANGO_SETTINGS_MODULE', default='config.settings.local'))
This allows manage.py commands to run using whatever settings I have actually specified.
A related question asks this, but its answer isn't about using Gunicorn.
How do I correctly set DJANGO_SETTINGS_MODULE to production in my production environment, and to staging in my staging environment?
I have two settings files - staging.py and production.py.
Because I was having trouble setting the variable, I simply made the default lines in manage.py and wsgi.py look like this:
manage.py
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings.production")
wsgi.py
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings.production")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
...so that in production, no matter the shenanigans with this pesky variable, my production app would remain on production settings if DJANGO_SETTINGS_MODULE wasn't set.
The problem is that I want my staging app to remain on staging settings so that emails don't go out from that (separate) server.
I have the above files in staging, as well as these attempts to properly set settings.staging:
gunicorn.conf (an Upstart job):
description "Gunicorn daemon for Django project"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
# If the process quits unexpectedly, trigger a respawn
respawn
setuid django
setgid django
chdir /src
script
    export DJANGO_SETTINGS_MODULE="settings.staging"
    exec /opt/Envs/mysite/bin/gunicorn \
        --name=mysite \
        --pythonpath=/opt/Envs/mysite/lib/python2.7/site-packages \
        --bind=127.0.0.1:9000 \
        --config /src/bin/gunicorn/gunicorn.py \
        mysite.wsgi:application
end script
Also, a file named /etc/profile.d/myenvvars.sh contains:
export DJANGO_SETTINGS_MODULE=settings.staging
And finally, I'm using virtualenvwrapper, with this line in /opt/Envs/myappenv/bin:
export DJANGO_SETTINGS_MODULE=settings.staging
As you can see, I'm taking a belt-and-suspenders approach to keep the staging server on staging settings. However, despite these FOUR ways of setting DJANGO_SETTINGS_MODULE=settings.staging, it still sometimes defaults to settings.production and sends out emails.
What is the proper way to set DJANGO_SETTINGS_MODULE once and for all on both my staging and production servers?