Supervising virtualenv django app via supervisor - django

I'm trying to use supervisor in order to manage my django project running gunicorn inside a virtualenv.
My conf file looks like this:
[program:diasporamas]
command=/var/www/django/bin/gunicorn_django
directory=/var/www/django/django_test
process_name=%(program_name)s
user=www-data
autostart=false
stdout_logfile=/var/log/gunicorn_diasporamas.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=2
stderr_logfile=/var/log/gunicorn_diasporamas_errors.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=2
The problem is, I need supervisor to launch the command after running 'source bin/activate' in my virtualenv. I've searched Google for an answer but haven't found anything.
Note: I don't want to use virtualenvwrapper
Any help please?

The virtualenv activate script does little more than prepend the virtualenv's bin directory to the PATH environment variable (it also sets VIRTUAL_ENV and tweaks the prompt), so you can get the same effect with:
[program:diasporamas]
command=/var/www/django/bin/gunicorn_django
directory=/var/www/django/django_test
environment=PATH="/var/www/django/bin"
...
Since supervisor version 3.2 you can use variable expansion to preserve the existing PATH too:
[program:diasporamas]
command=/var/www/django/bin/gunicorn_django
directory=/var/www/django/django_test
environment=PATH="/var/www/django/bin:%(ENV_PATH)s"
...
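After editing the config, you still have to tell supervisor to pick it up. A minimal sketch of the usual sequence (the program name diasporamas comes from the config above):
supervisorctl reread
supervisorctl update
supervisorctl start diasporamas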

Related

Where do I set environment variables for Django?

Hi everyone!
Django 1.11 + PostgreSQL 9.6 + Gunicorn + Ubuntu 16.04 in AWS
I want to set environment variables for sensitive info (Django secret key, DB password, ...).
I studied many articles about ways of setting them, but when I tried reading them with os.environ['env_name'], this is what I found:
1. .bashrc: not working
2. .bash_profile: not working
3. .profile: not working
4. /etc/environment: not working
5. Gunicorn script file (systemd): I set them in the gunicorn systemd script. It works very well.
But because I want to use the environment variables in other programs too, I also set them via configurations 1~4. I don't understand why configurations 1~4 didn't work. Is there a scope or priority to setting environment variables?
EDIT:
I use an Ubuntu 16.04 server, so I can't restart the terminal session.
I tried 'source .bashrc' and logging out and back in, but it didn't work.
Of course, 'echo $some_env_var' works in my shell; what I mean is that Django can't read the variable.
.bashrc will work for local development but not for a production environment: daemons started by systemd (like your Gunicorn service) don't source shell startup files, so variables exported there are invisible to them. I just spent quite a bit of time looking for the answer to this, and here's what worked for me:
1) Create a file somewhere on your server titled settings.ini. I did this in /etc/project/settings.ini
2) Add your config data to that file using the following format, where the key is the variable name and the value is a string. Note that you don't need to surround the value with quotes.
[section]
secret_key_a=somestringa
secret_key_b=somestringb
3) Access these variables using python's configparser library. The code below could be in your django project's settings file.
from configparser import RawConfigParser

config = RawConfigParser()
# RawConfigParser does no %-interpolation, so values containing '%' are read literally
config.read('/etc/project/settings.ini')
DJANGO_SECRET = config.get('section', 'secret_key_a')
Source: https://code.djangoproject.com/wiki/SplitSettings (ini-style section)
The simplest solution, as already mentioned, is to use os.environ.get and then set your server's environment variables in some way (config stores, bash files, etc.).
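For completeness, a minimal sketch of that approach (the variable name and the fallback value are examples, not Django conventions):
import os

# Falls back to a dev-only value if the variable is unset
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', 'dev-only-insecure-key')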
Another slightly more sophisticated way is to use python-decouple and .env files. Here's a quick how-to:
1) Install python-decouple (preferably in a venv if not using Docker):
pip install python-decouple
2) Create a .env file in the root of your Django-project, add a key like;
SECRET_KEY=SomeSecretKeyHere
3) In your settings.py, or any other file where you want to use the configuration values:
from decouple import config
...
SECRET_KEY = config('SECRET_KEY')
4) As you probably don't want these secrets to end up in your version control system, add the file to your .gitignore. To make it easier to set up a new project, you could check a .env_default file with default/dummy values into the VCS, as long as it's not used in production.
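decouple can also supply defaults and cast values, which is handy for non-string settings; for example (DEBUG here is just an example key):
from decouple import config

# default and cast are part of decouple's config API
DEBUG = config('DEBUG', default=False, cast=bool)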
Create (or edit) the .bashrc file on your server and add an export line:
export THE_NAME_IN_BASHRC=some_value
Then in settings.py:
import os
some_variable = os.environ.get('THE_NAME_IN_BASHRC')
If you're using a virtualenv you can add the environment variables to that specific environment. You can use export KEY=VALUE in your terminal, but that will not persist. If you would like your values to persist, edit the file:
sudo nano your_environment/bin/activate
Then at the bottom add the values you want, e.g.:
export MY_KEY="12345"
And save. Remember to deactivate and reactivate your virtualenv for the changes to take effect.
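A quick way to verify that workflow (assuming the activate script edit above):
deactivate
source your_environment/bin/activate
echo $MY_KEY   # should print 12345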
pip install python-dotenv
Then in settings.py:
import os
from dotenv import load_dotenv

load_dotenv('.env')
SECRET_KEY = os.getenv('SECRET_KEY')
And in your .env file (no spaces around the =):
SECRET_KEY=your_secret_key

How does a django app start its virtualenv?

I'm trying to understand how the virtual environment gets invoked. The website I have been tasked to manage has a .venv directory. When I ssh into the site to work on it, I understand I need to activate it with source .venv/bin/activate. My question is: how does the web application invoke the virtual environment? How do I know it is using the .venv, not the global python?
More detail: it's a Drupal website with Django kind of glommed onto it. Apache is the main server. I believe the Django part is served by gunicorn. The designer left town.
Okay, I've found how, in my case, the virtualenv was being invoked for the django.
BASE_DIR/run/gunicorn script has:
#GUNICORN='/usr/bin/gunicorn'           # system-wide gunicorn, commented out
GUNICORN=".venv/bin/gunicorn"           # gunicorn from the virtualenv
GUNICORN_CONF="$BASE_DIR/run/gconf.py"
.....
$GUNICORN --config $GUNICORN_CONF --daemon --pid $PIDFILE $MODULE
So this takes us into the .venv where the gunicorn script starts with:
#!/media/disk/www/aavso_apps/.venv/bin/python
Voila
Just use the absolute path when calling the python in the virtualenv.
For example, if your virtualenv is located in /var/webapps/yoursite/env,
then call it as /var/webapps/yoursite/env/bin/python.
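The same trick works for console scripts installed into the venv, since their shebang points at the venv's own interpreter. A sketch (the module path yoursite.wsgi is an assumption, not from the question):
/var/webapps/yoursite/env/bin/gunicorn yoursite.wsgi:application --bind 127.0.0.1:8000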
If you run just Django behind a reverse proxy, Django will use whatever Python environment was active for the user that started the server, i.e. whichever interpreter which python resolved to at launch. If you're using a management tool like Gunicorn, you can specify which environment to use in its configs, although Gunicorn itself expects you to activate the virtual environment before invoking it.
EDIT:
Since you're using Gunicorn, take a look at this, https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-apps-using-gunicorn-http-server-behind-nginx

Unable to run setup.py behind proxy

I'm new to python (and linux) and I'm trying to run the setup.py, but it's not working properly because there's a corporate proxy blocking the requests to PyPI.
I checked this link on how to use setup.py properly, and also checked this and this solution on Stack Overflow, but I can't make them work (or I'm wrong in the way I'm applying them).
I'm using:
virtualenv
virtualenvwrapper
python 2.7
Ubuntu 14
I already added http_proxy and https_proxy in .profile and .bashrc.
When I use pip install --proxy the.proxy:port some_module it works properly. (I also know the env variables do something, because before setting them I couldn't even reach stackoverflow.com, so I assume they work just fine.)
What I have already tried is:
Trying to use --proxy on python
Looking for something similar to --proxy in python
Trying to add the proxy configuration described in one of the solutions mentioned earlier to my setup.py (which is added to the description of this problem)
Successfully downloading a couple of modules with pip --proxy (this is my current not-so-good solution)
Messing with the python configuration files in the virtualenv in the hope of finding some proxy config
My setup.py file looks like this:
from setuptools import setup, find_packages
import requests

with open('development.txt') as file:
    install_requires = file.readlines()

with open('development_test.txt') as file_test:
    test_requires = file_test.readlines()

setup(
    name="my_project",
    version="1.0.0-SNAPSHOT",
    packages=find_packages(),
    install_requires=install_requires,
    test_suite="nose.collector",
    tests_require=test_requires,
)

proxies = {
    "http": "http://proxy.myproxy.com:3333",
    "https": "http://proxy.myproxy.com:3333",
}

# not sure what goes here... tried a few things but nothing happened
requests.get("https://pypi.python.org", proxies=proxies)
I'll try any suggestion, any help appreciated.
Thanks
After digging into how python works and not being able to find the problem, I started looking at how the bash commands work.
It turns out you have to run the command with sudo -E so the exported http_proxy variables survive sudo's environment reset.
A rookie mistake.
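In other words, something like this (a sketch; the proxy host and port are the ones from the question):
export http_proxy=http://proxy.myproxy.com:3333
export https_proxy=http://proxy.myproxy.com:3333

# -E preserves the caller's environment (including the proxy variables) across sudo
sudo -E python setup.py install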

What is the entry point that Apache mod_wsgi on OpenShift Online is looking for?

I've got an OpenShift Python 2.7 application that I am guessing uses mod_wsgi.
Is it possible to ssh in to OpenShift Online and view the .conf files located somewhere like:
/etc/apache2/sites-available/
I want to see which .wsgi file Apache looks for, as defined in WSGIScriptAlias.
Perhaps it just looks for /wsgi/application?
A few posts indicate that changes have been made recently to the structure of Python applications, but they may not affect my older version:
How to change or override openshift.conf in Python 3.3 cartridge
https://blog.openshift.com/openshift-online-march-2014-release-blog/
WSGI Application not found on OpenShift
Ideally, I'm trying to comprehend the order in which these files are executed and their functions:
/wsgi/application
/wsgi/my-bottle-application
setup.py
setup.pyc
setup.pyo
UPDATE
This indicates the entry point is wsgi/application:
https://github.com/openshift/origin-server/search?utf8=%E2%9C%93&q=OPENSHIFT_PYTHON_WSGI_APPLICATION
I'd still be interested to know the order of execution of the above files and exactly what setup.py does and how it is executed - i.e. there are no references to it in application, so how is it 'called'?
According to this you can set your entry point: https://blog.openshift.com/openshift-online-march-2014-release-blog/
Python
For python apps we've made some similar changes:
We got rid of wsgi/, wsgi/static/, data/ and libs/ directories.
You can use wsgi.py instead of wsgi/application as the default WSGI entry-point.
We’ve discarded the README.md file that can often conflict with an upstream file of the same name.
New OPENSHIFT_PYTHON_WSGI_APPLICATION to set an alternative WSGI entry-point.
wsgi.py          WSGI entry-point (configurable by $OPENSHIFT_PYTHON_WSGI_APPLICATION)
setup.py         Standard setup.py, specify deps here
.openshift/      Location for OpenShift specific files
  action_hooks/  See the Action Hooks documentation
  markers/       See the Markers section below
For more information on environment variables on OpenShift Online: https://developers.openshift.com/en/managing-environment-variables.html
Example that worked for me using the OPENSHIFT_PYTHON_WSGI_APPLICATION variable:
rhc env-set OPENSHIFT_PYTHON_WSGI_APPLICATION="${OPENSHIFT_REPO_DIR}/server/wsgi.py" --app MyApp
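For reference, the entry-point module just needs to expose a module-level callable named application. A minimal sketch (this is the generic WSGI convention, not OpenShift-specific code):
def application(environ, start_response):
    # environ carries the request; start_response sets status and headers
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from wsgi.py']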

Is there a way to add custom django-admin.py commands that work outside of projects?

I'm trying to write a custom command that works outside of Django projects. I was thinking I could follow the coding patterns of Django's own such commands (e.g., startproject), include my command in an app and install it.
Alas, it seems django cannot see this command, as perhaps it doesn't scan site-packages for custom commands.
Is there a way to make this work or am I sadly correct?
UPDATE: I should note that the goal I was trying to accomplish (writing a command that starts projects based on custom templates) is supported in the coming 1.4 release of Django: https://docs.djangoproject.com/en/dev/ref/django-admin/#django-admin-startproject (see the --template option).
Based on this code from django.core.management, it does appear that django only searches for project-less commands in its own packages, and will otherwise only find commands by scanning INSTALLED_APPS, which means a project is required.
You can use a custom manage.py.
You do need a project, but a project is nothing more than a python package with a settings.py (and maybe a urls.py file).
So you could just create a project with whatever commands you want, and in your setup script include a binary script that is nothing more than a manage.py in disguise.
I use it to have a manage.py in the bin path of a virtualenv, but you can call it something else and have that "django" project installed in your system python.
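A minimal sketch of such a script (myproject.settings is a placeholder for your package's settings module); install it as a console script and your custom commands work from any directory:
#!/usr/bin/env python
import os
import sys

if __name__ == '__main__':
    # Point Django at the installed project's settings, then dispatch the command
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)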
I don't quite understand from your post for what purpose you want to write such a command using Django's manage.py. But suppose you want (as I did) to run some script that works with Django models, for example. You cannot run such a script without setting up the Django environment.
I do the following:
put my code in script.py
manage.py shell
execfile('script.py')
Maybe this helps.
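Note that execfile only exists on Python 2; on Python 3 the equivalent inside manage.py shell would be:
exec(open('script.py').read())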