Running a Flask app with gunicorn and environment variables - flask

For local development I simply set up a .env file that contains the necessary variables and run the app with:
flask run
Everything seems fine, all the environment variables are being read and set correctly in the app.
However when I run the app with gunicorn:
gunicorn api:app --bind 127.0.0.1:5050
I can clearly see that the env variables are not loaded. Only if I set them explicitly in the gunicorn command:
gunicorn api:app --bind 127.0.0.1:5057 -e POSTGRES_DB=postgresql://XXXXX
then it works. However, since I can have many environment variables, this is not really feasible. Is there a way to set them using a file?

Gunicorn can read a gunicorn.conf.py file, which is just a normal Python file where you can set your environment variables:
# gunicorn.conf.py
import os
os.environ['POSTGRES_DB'] = "postgresql://XXXXX"
You can even tell it to load your .env files with something like:
# gunicorn.conf.py
import os
from dotenv import load_dotenv

for env_file in ('.env', '.flaskenv'):
    env = os.path.join(os.getcwd(), env_file)
    if os.path.exists(env):
        load_dotenv(env)
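Gunicorn automatically picks up a file named gunicorn.conf.py from the directory you launch it in; to be explicit, or to use a different filename, pass it with -c:
gunicorn api:app --bind 127.0.0.1:5050 -c gunicorn.conf.py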

Related

Heroku django.core.exceptions.ImproperlyConfigured: Set the SECRET_KEY environment variable

I am trying to run a Django command on my Heroku production server, but get the ImproperlyConfigured error shown in the title.
Note: The same command works fine in my local dev environment.
I took the following steps:
SSH onto my Django server:
heroku ps:exec -a library-backend
I run my custom command:
python manage.py test_command
Receive the error above.
My environment variables are set in my settings.py as follows:
import environ
# Setting environment variables
env = environ.Env(DEBUG=(bool, False))
environ.Env.read_env()
DEBUG = env('DEBUG')
SECRET_KEY = env('SECRET_KEY')
DATABASE_URL = env('DATABASE_URL')
My django app runs normally on the heroku server. I am only getting this error when I try to run a custom django management command.
Can anyone see where I'm going wrong?
For reference, the management command is specified in library/management/commands/test_command.py:
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        print("Testing management command")
According to the documentation around heroku ps:exec:
The SSH session created by Heroku Exec will not have the config vars set as environment variables (i.e., env in a session will not list config vars set by heroku config:set).
OK, so I figured this out at last with a hint in the right direction from the comment. When you run:
heroku ps:exec -a <myapp>
Heroku will give you an SSH session with access to the files and folders, but won't set any of your environment variables (like SECRET_KEY or DATABASE_URL).
To get an SSH session with the environment variables set, instead use:
heroku run bash -a <myapp>
Now you can run your django command and you won't get any ImproperlyConfigured errors.
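Alternatively, you can skip the shell entirely and run the command in a one-off dyno, which also gets the config vars:
heroku run python manage.py test_command -a <myapp>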

.env not working with Django with supervisor

I have a Django 2.2 project and all secrets are in a .env file.
I'm using a dotenv library to load the .env file into the Django application in manage.py:
import os
import dotenv

def main():
    # Read from .env file
    env_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), '.env')
    dotenv.read_dotenv(env_file)
    ...
The environment file is loaded correctly when running locally.
On the server, I'm using supervisor to run the application with the following configuration:
[supervisord]
[program:myapp]
command=/var/www/html/app/start_gunicorn.sh
directory=/var/www/html/app/
autostart=true
autorestart=true
stopasgroup=true
stopsignal=QUIT
logfile=/home/ubuntu/log/supervisor/supervisor.log
logfile_maxbytes=5MB
logfile_backups=10
loglevel = info
stderr_logfile=/home/ubuntu/log/supervisor/qcg-backend.err.log
stdout_logfile_maxbytes=5MB
stdout_logfile_backups=10
stdout_logfile=/home/ubuntu/log/supervisor/qcg-backend.out.log
stderr_logfile_maxbytes=5MB
stderr_logfile_backups=10
But the environment variables are not loaded and are not available in Django.
Running the following from an SSH console works:
python manage.py shell
import os
os.environ.get('DEBUG')
> True
But when running the application, the environment variables are not accessible and are not applied.
manage.py is not invoked when running Django in production. The dotenv docs say you should add the loader code to the top of wsgi.py as well.
I think putting it in settings.py is more convenient: no need to add it to both manage.py and wsgi.py.
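A minimal sketch of the settings.py approach, assuming django-dotenv and a .env file sitting next to manage.py (paths are illustrative):
# settings.py
import os
import dotenv

# Project root, where manage.py and .env live
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
dotenv.read_dotenv(os.path.join(BASE_DIR, '.env'))

# Settings below can now read the loaded variables
DEBUG = os.environ.get('DEBUG') == 'True'
SECRET_KEY = os.environ['SECRET_KEY']
Since every entry point (manage.py, wsgi.py, gunicorn) ends up importing settings, the variables get loaded once, wherever the process starts.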

python manage.py collectstatic is loading the wrong (local) settings

I am using the cookiecutter-django .env design to load different settings depending on the environment. Running locally should use the "local.py" settings, and running in AWS Elastic Beanstalk it should load "dev.py". Both import from "common.py".
Running the server in AWS with the dev settings works, but collectstatic fails, because it tries to import the local settings instead of the dev settings.
How can the EC2 instance run collectstatic and load the (appropriate) dev.py settings?
OK, found it. The manage.py file looked like this:
if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.local')
forcing all commands to run with the local settings instead of loading from the .env file. I changed it to:
import os
import environ

ROOT_DIR = environ.Path(__file__) - 1
env = environ.Env()
env.read_env(ROOT_DIR.file('config/settings/.env'))

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', env('DJANGO_SETTINGS_MODULE', default='config.settings.local'))
This allows manage.py commands to run with whatever settings I have actually specified.
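The .env file at config/settings/.env then carries the selection; on the Elastic Beanstalk instance it would contain something like (the dev module path is an assumption based on the question):
DJANGO_SETTINGS_MODULE=config.settings.dev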

What is the correct way to set DJANGO_SETTINGS_MODULE in a production/staging environments?

This question asks the same thing, but the answer isn't about using Gunicorn.
How do I correctly set DJANGO_SETTINGS_MODULE to production in my production environment, and to staging in my staging environment?
I have two settings files - staging.py and production.py.
Because I was having trouble setting the variables, I simply made my default lines in manage.py and wsgi.py look like:
manage.py
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings.production")
wsgi.py
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings.production")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
...so that in production, no matter the shenanigans with this pesky variable, my production app would remain on production settings if DJANGO_SETTINGS_MODULE wasn't set.
The problem is that I want my staging app to remain on staging settings so that emails don't go out from that (separate) server.
I have the above files in staging, as well as these attempts to properly set settings.staging:
gunicorn.conf:
description "Gunicorn daemon for Django project"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
# If the process quits unexpectedly trigger a respawn
respawn
setuid django
setgid django
chdir /src
script
    export DJANGO_SETTINGS_MODULE="settings.staging"
    exec /opt/Envs/mysite/bin/gunicorn \
        --name=mysite \
        --pythonpath=/opt/Envs/mysite/lib/python2.7/site-packages \
        --bind=127.0.0.1:9000 \
        --config /src/bin/gunicorn/gunicorn.py \
        mysite.wsgi:application
end script
Also, there is a file named /etc/profile.d/myenvvars.sh that contains:
export DJANGO_SETTINGS_MODULE=settings.staging
And finally, I'm using virtualenvwrapper, with this line in /opt/Envs/myappenv/bin:
export DJANGO_SETTINGS_MODULE=settings.staging
As you can see, I'm trying the belt and suspenders technique to keep the settings on staging on the staging server. However, despite these FOUR ways of trying to set DJANGO_SETTINGS_MODULE=settings.staging, it still sometimes defaults to settings.production and sends out emails.
What is the proper way to set DJANGO_SETTINGS_MODULE once and for all on both my staging and production servers?

How to avoid putting environment variables into multiple places with Django, nginx and uWSGI?

I am trying to configure nginx+uWSGI to serve my Django application.
When I put environment variables into myapp_uwsgi.ini:
uid = username
gid = username
env = DJANGO_SITE_KEY="..."
it works as expected.
However, my app has some management commands which should also have access to the environment variables I have defined.
If I put the environment variables to /home/username/.bashrc:
export DJANGO_SITE_KEY="..."
uWSGI does not load them.
I have tried to put the environment variables into a separate file:
#!/bin/sh
export DJANGO_SITE_KEY="..."
and then call it from both .bashrc:
. /home/username/environment
and myapp_uwsgi.ini:
exec-pre-app = . /home/username/environment
In uWSGI logs I see this line:
running ". /home/username/environment" (pre app)...
But my Django app is unable to access the environment variables with os.environ.
I have also tried putting the export commands into the preactivate hook of virtualenvwrapper and using the virtualenv = setting of uWSGI, but that does not work either (I assume the hooks are only executed when using virtualenvwrapper commands like workon).
Here is the answer from uWSGI developers:
just place each of them (one per line) in a text file in the form
VAR=VALUE
then in uWSGI config
[uwsgi]
for-readline = yourfile
env = %(_)
endfor =
This also works with yml config files:
for-readline: filename
env: %(_)
endfor:
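For example, if yourfile contains one variable per line (values are placeholders):
DJANGO_SITE_KEY=...
DJANGO_SECRET_KEY=...
each line is expanded into its own env = option, exactly as if it had been written in the config directly.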
I use django-dotenv. Put your env vars in a file like .env inside your project, and then load it in manage.py and wsgi.py. No other config is required: uWSGI and manage.py commands will work as expected, and all your env vars are stored in just one file.
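A minimal wsgi.py along those lines, assuming django-dotenv (the mysite.settings module path is illustrative):
# wsgi.py
import os
import dotenv

# Load .env from the project root (one directory above this file) before Django starts
dotenv.read_dotenv(os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), '.env'))

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
The same dotenv.read_dotenv(...) call goes near the top of manage.py, so management commands see the identical environment.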
Another approach is to use a configuration management system such as Salt or Ansible.
With those, you can create Jinja templates for both the uWSGI and Django configs, with {{ variables }} defined in one place.