How does a Django app start its virtualenv? - django

I'm trying to understand how the virtual environment gets invoked. The website I have been tasked to manage has a .venv directory. When I ssh into the site to work on it, I understand I need to invoke the environment with source .venv/bin/activate. My question is: how does the web application invoke the virtual environment? How do I know it is using the .venv, not the global Python?
More detail: it's a Drupal website with Django kind of glommed onto it. Apache is the main server; I believe the Django part is served by gunicorn. The designer left town.

Okay, I've found how, in my case, the virtualenv was being invoked for Django.
The BASE_DIR/run/gunicorn script has:
#GUNICORN='/usr/bin/gunicorn'
GUNICORN=".venv/bin/gunicorn"
GUNICORN_CONF="$BASE_DIR/run/gconf.py"
.....
$GUNICORN --config $GUNICORN_CONF --daemon --pid $PIDFILE $MODULE
So this takes us into the .venv, where the gunicorn script starts with:
#!/media/disk/www/aavso_apps/.venv/bin/python
Voilà.
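To confirm at runtime which interpreter is actually serving the app (the asker's second question), a minimal check is to print sys.executable from inside the running process, e.g. in a manage.py shell or a throwaway view; for a venv-served app it should point inside .venv/bin rather than /usr/bin:
import sys
# The interpreter running this process; a venv shows up as .../.venv/bin/python
print(sys.executable)
# The environment root that interpreter belongs to
print(sys.prefix)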

Just use an absolute path when calling Python in a virtualenv.
For example, if your virtualenv is located in /var/webapps/yoursite/env,
then you call it as /var/webapps/yoursite/env/bin/python.
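For instance (a sketch reusing the example path above; the module names are assumptions), no activate step is needed when the venv's binaries are addressed directly:
/var/webapps/yoursite/env/bin/python manage.py migrate
/var/webapps/yoursite/env/bin/gunicorn yoursite.wsgi:application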

If you run just Django behind a reverse proxy, Django will use whatever Python environment the user that started the server had, i.e. whichever interpreter the python command resolved to. If you're using a management tool like Gunicorn, you can specify which environment to use in its configs, although Gunicorn itself expects the virtual environment to be activated (or its binaries called by absolute path) before it is invoked.
EDIT:
Since you're using Gunicorn, take a look at this, https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-apps-using-gunicorn-http-server-behind-nginx

Related

Running django-q using elastic beanstalk on aws linux 2 instances

I use Elastic Beanstalk on AWS to host my webapp, which needs a task runner like Django Q. I need to run it on my instance and am facing difficulty doing that. I found this script https://gist.github.com/codeSamuraii/0e11ce6d585b3290b15a9ad163b9aa06 which does what I need, but it's for the older version of EC2 instances. So far I know I must run Django Q post-deployment, but is it possible to add the process to the Procfile along with starting the WSGI server?
Any help that could point me in the right direction will be greatly appreciated.
You can create a "Procfile" at the root of your bundle with the following content:
web: gunicorn --bind 127.0.0.1:8000 --workers=1 --threads=15 mysite.config.wsgi:application
qcluster: python3 manage.py qcluster
Obviously, replace "mysite.config.wsgi" with the path to your wsgi module.
I ended up not finding a solution; I chose a different tech altogether to fulfill the requirements: a crontab making curl requests to the Django server. On the Django admin I would create task routes linking to modules in the file storage, then paste the route info into the crontab settings with the appropriate time interval.
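As a sketch of that crontab-plus-curl approach (the URL and interval are made up for illustration):
# run every 15 minutes: hit a Django route that executes the pending task
*/15 * * * * curl -s https://example.com/tasks/run-pending/ > /dev/null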

How to deploy a django/gunicorn application as a systemd service?

Context
For the last few years, I have only been using Docker to containerize and distribute Django applications. For the first time, I am asked to set things up so the app can be deployed the old way. I am looking for a way to nearly achieve the level of encapsulation and automation I am used to while avoiding any pitfalls.
Current heuristic
So, the application (my_app):
is built as a .whl
comes with gunicorn
is published on a private Nexus and
can be installed on any of the client's servers with a simple pip install
(it relies on Apache as a reverse proxy / static files server and PostgreSQL, but this is irrelevant for this discussion)
To make sure that the app will remain available, I thought about running it as a service managed by systemd. Based on this article, here is what /etc/systemd/system/my_app.service could look like:
[Unit]
Description=My app service
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=centos
ExecStart=gunicorn my_app.wsgi:application --bind 8000
[Install]
WantedBy=multi-user.target
and, from what I understand, I could provide this service with environment variables, to be read by os.environ.get(), somewhere in /etc/systemd/system/my_app.service.d/.env (see the sketch after the listing below):
SECRET_KEY=something
SQL_ENGINE=something
SQL_DATABASE=something
SQL_USER=something
SQL_PASSWORD=something
SQL_HOST=something
SQL_PORT=something
DATABASE=something
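One caveat worth flagging: as far as I know, systemd only picks up *.conf drop-ins from a .service.d directory, so a bare .env file there would be ignored. The usual pattern is an EnvironmentFile= line pointing at a key=value file; a minimal sketch, with hypothetical paths:
# /etc/systemd/system/my_app.service.d/override.conf
[Service]
# every KEY=value line in the referenced file becomes a process environment variable
EnvironmentFile=/etc/my_app/env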
Open questions
First of all, are there any obvious pitfalls?
Upgrade: will restarting the service be enough to upgrade the application after a pip install --upgrade my_app?
Library conflicts: is it possible to make sure that gunicorn my_app.wsgi:application will use the correct versions of the libraries used by the project? Or do I need a virtualenv? If so, should source /path/to/the/venv/bin/activate be used in ExecStart, or is there a cleaner way to make sure ExecStart runs in a separate virtual environment?
Where should python manage.py migrate and python manage.py collectstatic live? Should I put those in /etc/systemd/system/my_app.service?
This is for the third question; maybe you can try it, though I'm also not sure it works.
You have to create a virtualenv and install gunicorn and Django in it.
Then create a Python file, e.g. "gunicorn_config.py", and fill it like this (adjust as needed):
#gunicorn_config.py
command = 'env/bin/gunicorn'
pythonpath = 'env/myproject'
bind = '127.0.0.1:8000'
In the systemd ExecStart, use env/bin/gunicorn -c env/gunicorn_config.py myproject.wsgi.
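On the asker's ExecStart question: sourcing activate there is unnecessary, since every script installed into a venv (gunicorn included) carries a shebang pinning the venv's interpreter, as the first answer in this thread shows. Pointing ExecStart at the venv's gunicorn by absolute path is the usual clean approach; a sketch with hypothetical paths:
[Service]
WorkingDirectory=/opt/my_app
# no activate step: the venv's gunicorn shebang already selects the venv's python
ExecStart=/opt/my_app/venv/bin/gunicorn my_app.wsgi:application --bind 0.0.0.0:8000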

Dockerfile Customization for Google Cloud Run with Nginx and uWSGI

I'm trying to migrate a Django application from Google Kubernetes Engine to Google Cloud Run, which is fully managed. Basically, with Cloud Run, you containerize your application into a single Dockerfile and Google does the rest.
I have a Dockerfile which at one point calls a bash script via ENTRYPOINT.
But I need to start both Nginx and Gunicorn. The Google Cloud Run documentation suggests starting Gunicorn like this:
CMD gunicorn -b :$PORT foo.wsgi
(let's say my Django app is named "foo")
But I also need to start Nginx via:
CMD ["nginx", "-g", "daemon off;"]
And since only one CMD is allowed per Dockerfile, I'm not sure how to combine them.
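One common workaround (a sketch; the script name, port, and nginx proxy setup are assumptions) is a small wrapper that backgrounds Gunicorn and keeps Nginx in the foreground, so the container needs only a single CMD:
#!/bin/sh
# start.sh: run Gunicorn in the background, keep Nginx as the foreground process
# (the nginx config must listen on $PORT and proxy to 127.0.0.1:8000 for Cloud Run)
gunicorn -b 127.0.0.1:8000 foo.wsgi &
exec nginx -g 'daemon off;'
and in the Dockerfile:
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]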
To try to get around some of these difficulties, I was looking into building from a Dockerfile that already has both working, and I came across this one:
https://github.com/tiangolo/meinheld-gunicorn-docker
But the paths don't quite match mine. Quoting from the documentation of that repo:
You don't need to clone the GitHub repo. You can use this image as a
base image for other images, using this in your Dockerfile:
FROM tiangolo/meinheld-gunicorn:python3.7
COPY ./app /app
It will expect a file at /app/app/main.py.
And will expect it to contain a variable app with your "WSGI"
application.
My wsgi.py file ends up at /app/foo/foo/wsgi.py and contains an application named application.
But if I understand that documentation correctly, when it says it will expect the WSGI application to be named app and to be located at /app/app/main.py, it's basically saying that I need to revise the path and the variable name so that, when it builds the image, it knows the app is called application and will be found at /app/foo/foo/wsgi.py instead of /app/app/main.py.
I assume I can fix the app vs. application variable name by adding a line like app = application to my wsgi.py file, but I'm not sure how to correct the path that Docker expects.
Can someone explain to me how to adapt this to my needs?
(Or any other way to get it working)
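For what it's worth, a minimal shim along the lines that image expects (a sketch; the sys.path entry is a guess at how the project lands in the image) would re-export the Django callable under the expected name:
# /app/app/main.py
import sys
# hypothetical: make the directory containing the inner "foo" package importable
sys.path.insert(0, '/app/foo')
# expose the Django WSGI callable under the name the image looks for
from foo.wsgi import application as app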

Supervising virtualenv django app via supervisor

I'm trying to use supervisor to manage my Django project running gunicorn inside a virtualenv.
My conf file looks like this:
[program:diasporamas]
command=/var/www/django/bin/gunicorn_django
directory=/var/www/django/django_test
process_name=%(program_name)s
user=www-data
autostart=false
stdout_logfile=/var/log/gunicorn_diasporamas.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=2
stderr_logfile=/var/log/gunicorn_diasporamas_errors.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=2
The problem is, I need supervisor to launch the command after it has run source bin/activate in my virtualenv. I've been searching Google for an answer but didn't find anything.
Note: I don't want to use virtualenvwrapper
Any help please?
The documentation for the virtualenv activate script says that it essentially just modifies the PATH environment variable (prepending the virtualenv's bin directory), in which case you can do:
[program:diasporamas]
command=/var/www/django/bin/gunicorn_django
directory=/var/www/django/django_test
environment=PATH="/var/www/django/bin"
...
Since version 3.2 you can use variable expansion to preserve the existing PATH too:
[program:diasporamas]
command=/var/www/django/bin/gunicorn_django
directory=/var/www/django/django_test
environment=PATH="/var/www/django/bin:%(ENV_PATH)s"
...
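After adding or changing the program section, the config has to be reloaded and the process started (standard supervisorctl usage):
supervisorctl reread
supervisorctl update
supervisorctl start diasporamas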

Django virtualenv deployment configuration

I recently started to use virtualenvwrapper and created:
mkdir ~/.virtualenvs
mkvirtualenv example.com
Virtualenvwrapper automatically creates a virtualenv named example.com under ~/.virtualenvs,
so this is the central container for all virtualenvs.
After that I installed Django and some other packages via pip,
and my site is at
/srv/www/example.com/public_html/
Do I have to put my site under
~/.virtualenvs/example.com
If not, how can I use my example.com virtualenv with my site under /srv/www/example.com/public_html?
Could you show me an Apache mod_wsgi configuration for this deployment?
Thanks
Read:
http://code.google.com/p/modwsgi/wiki/VirtualEnvironments
It may not be sufficient to use just site.addsitedir(), as it doesn't deal with certain ordering issues. You are better off using the configuration directives/options provided by mod_wsgi to add the directories. Otherwise, if ordering becomes an issue, you will need to add code to the WSGI script that reorders sys.path as necessary.
In your WSGI script:
import site
site.addsitedir('/home/username/.virtualenvs/example.com/lib/python2.5/site-packages')
(Adjust as appropriate for your Python version, etc.)
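For completeness, a daemon-mode sketch of the mod_wsgi directives the answer alludes to (the paths mirror the question; the script filename and layout are assumptions):
WSGIDaemonProcess example.com python-path=/srv/www/example.com/public_html:/home/username/.virtualenvs/example.com/lib/python2.5/site-packages
WSGIProcessGroup example.com
WSGIScriptAlias / /srv/www/example.com/public_html/wsgi.py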