Gunicorn Environment Variable Setting - django

I'm currently having difficulty passing environment variables into Gunicorn for my Django project. I'm on the latest 19.1 version. I have a wsgi.py file like so:
import os
import sys

from django.core.wsgi import get_wsgi_application

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
PROJECT_DIR = os.path.abspath(os.path.join(BASE_DIR, '..'))
sys.path.append(PROJECT_DIR)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")

def application(environ, start_response):
    _application = get_wsgi_application()
    os.environ['SERVER_ENV'] = environ['SERVER_ENV']
    os.environ['SERVER_ID'] = environ['SERVER_ID']
    return _application(environ, start_response)
When I run gunicorn from the command line as:
SERVER_ENV=TEST SERVER_ID=TEST gunicorn -b 127.0.0.1:8080 --error-logfile - --access-logfile - app.wsgi:application
and then pass a request to gunicorn, I keep getting:
2014-08-01 08:39:17 [21462] [ERROR] Error handling request
Traceback (most recent call last):
  File "/opt/virtualenv/python-2.7.5/django-1.5.5/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 93, in handle
    self.handle_request(listener, req, client, addr)
  File "/opt/virtualenv/python-2.7.5/django-1.5.5/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 134, in handle_request
    respiter = self.wsgi(environ, resp.start_response)
  File "/opt/sites/itracker/wsgi.py", line 18, in application
    os.environ['SERVER_ENV'] = environ['SERVER_ENV']
KeyError: 'SERVER_ENV'
Printing out the environ values confirms that the environment variables are not being passed in. I've tried setting the environment variables in the virtualenv activation script, in a dedicated gunicorn shell script, and with gunicorn's --env flag, but nothing seems to work.
Any ideas?

I ran into a similar problem when deploying gunicorn as a systemd service, and I passed environment variables to it via a file:
In gunicorn.service:
[Service]
...
EnvironmentFile=/pathto/somefilewith_secrets
...
For example (cat /etc/systemd/system/gunicorn.service)
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/10008/digichainOpen
EnvironmentFile=/home/ubuntu/10008/digichainOpen/.env
ExecStart=/home/ubuntu/.local/share/virtualenvs/digichainOpen-Zk2Jnvjv/bin/gunicorn \
--worker-class=gevent --workers 4 \
--bind unix:/home/ubuntu/10008/digichainOpen/gunicorn.sock digichainOpen.wsgi:application
[Install]
WantedBy=multi-user.target
and the .env file can be:
my_var=someValue
some_secret=secretvalue
another_secret=blah
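One thing to keep in mind: systemd only reads unit files when they are loaded, so after adding the EnvironmentFile= line run "sudo systemctl daemon-reload" followed by "sudo systemctl restart gunicorn" for the new variables to take effect.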

You just have to export your environment variables.
export SERVER_ENV=TEST
export SERVER_ID=TEST
gunicorn -b 127.0.0.1:8080 --error-logfile - --access-logfile - app.wsgi:application
And in your code you can read them like this:
os.getenv('SERVER_ENV')
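If a variable might not be set, os.getenv also takes a default, which is handy in settings.py. A minimal sketch (the 'DEV' and 'unknown' fallbacks are just illustrative assumptions):
import os

# Read the exported variables, falling back to defaults if they were never set:
SERVER_ENV = os.getenv('SERVER_ENV', 'DEV')
SERVER_ID = os.getenv('SERVER_ID', 'unknown')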

If you want to run Django using a gunicorn config file, write a config.py file:
command = 'venv/bin/gunicorn'
pythonpath = 'venv'
bind = '127.0.0.1:8000'
workers = 2
raw_env = ["VARIABLE_HERE=VARIABLE_VALUE_HERE"]
wsgi_app = "project.wsgi"
Run it like this from inside the project directory:
gunicorn -c config.py
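If you prefer not to hard-code values in config.py, the same variables can be supplied per-invocation with gunicorn's --env flag, e.g.:
gunicorn -c config.py --env VARIABLE_HERE=VARIABLE_VALUE_HERE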

I don't quite understand what you are trying to do here. If you pass environment variables on the bash command line, they are already in os.environ: there is no need to get them from anywhere else. The WSGI environ dictionary is built from the request, not from the shell.
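A minimal check of this, assuming gunicorn was launched with the SERVER_ENV=TEST SERVER_ID=TEST prefix from the question:
import os

# Variables set in the launching shell are inherited by the gunicorn workers,
# so they are readable process-wide without touching the WSGI environ:
print(os.environ.get('SERVER_ENV'))   # -> 'TEST'
print(os.environ.get('SERVER_ID'))    # -> 'TEST'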

Related

How to pass broker_url from Django settings.py to a Celery service

I have Celery running as a service on Ubuntu 20.04 with RabbitMQ as a broker.
Celery repeatedly restarts because it cannot access the RabbitMQ URL (RABBITMQ_BROKER), a variable held in a settings.py outside of the Django root directory.
The same happens if I try to initiate celery via command line.
I have confirmed that the variable is accessible from within Django from a views.py print statement.
If I place the RABBITMQ_BROKER variable inside the settings.py within the Django root celery works.
My question is, how do I get celery to recognise the variable RABBITMQ_BROKER when it is placed in /etc/opt/mydjangoproject/settings.py?
My celery.py file:
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mydjangoproject.settings')
app = Celery('mydjangoproject')
default_config = 'mydjangoproject.celery_config'
app.config_from_object(default_config)
app.autodiscover_tasks()
My celery_config.py file:
from django.conf import settings
broker_url = settings.RABBITMQ_BROKER
etc...
The settings.py in /etc/opt/mydjangoproject/ (non relevant stuff deleted):
from mydjangoproject.settings import *
RABBITMQ_BROKER = 'amqp://rabbitadmin:somepassword@webserver:5672/mydjangoproject'
etc...
My /etc/systemd/system/celery.service file:
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=DJANGO_USER
Group=DJANGO_USER
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/opt/mydjangoproject
ExecStart=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi start $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --loglevel="${CELERYD_LOG_LEVEL}"'
ExecReload=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi restart $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
Restart=always
[Install]
WantedBy=multi-user.target
My /etc/conf.d/celery file:
CELERYD_NODES="worker"
CELERY_BIN="/opt/mydjangoproject/venv/bin/celery"
CELERY_APP="mydjangoproject"
CELERYD_CHDIR="/opt/mydjangoproject/"
CELERYD_MULTI="multi"
CELERYD_OPTS="--time-limit=300 --without-heartbeat --without-gossip --without-mingle"
CELERYD_PID_FILE="/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
Add the following line to the end of /etc/opt/mydjangoproject/settings.py to have celery pick up the correct broker URL (casing might vary based on the version of celery you are using):
BROKER_URL = broker_url = RABBITMQ_BROKER
This will put the configuration in a place where it will be read by the call to celery's config_from_object function.
Next, you will also have to add an environment variable to your systemd unit. Since you are accessing settings as mydjangoproject.settings, you have to make the parent of the mydjangoproject directory accessible in the PYTHONPATH:
Environment=PYTHONPATH=/etc/opt
PYTHONPATH gives Python a list of extra directories to search when resolving imports. However, because we have two different directories with the same name that we are using as a single package, we also have to add the following lines to /etc/opt/mydjangoproject/__init__.py and /opt/mydjangoproject/__init__.py:
import pkgutil
__path__ = pkgutil.extend_path(__path__, __name__)
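For context, pkgutil.extend_path scans every directory on sys.path for a subdirectory named after the package and appends each match to the package's __path__, which is what lets submodules of mydjangoproject live in both locations. A rough sketch of the resulting behaviour, assuming PYTHONPATH=/etc/opt is set and both __init__.py files carry the two lines above:
import mydjangoproject

# __path__ now spans both copies of the package, so each submodule
# resolves from whichever directory actually contains it:
print(mydjangoproject.__path__)
# e.g. ['/etc/opt/mydjangoproject', '/opt/mydjangoproject']
import mydjangoproject.settings   # found in /etc/opt/mydjangoproject
import mydjangoproject.celery     # found in /opt/mydjangoproject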
I solved this by adding the following to /etc/systemd/system/celery.service
Environment="PYTHONPATH=/etc/opt/mydjangoproject:/opt/mydjangoproject"
Environment="DJANGO_SETTINGS_MODULE=settings"

raise ConnectionError(self._error_message(e)) kombu.exceptions.OperationalError: Error 111 connecting to localhost:6379. Connection refused

A minimal django/celery/redis setup runs locally, but when deployed to Heroku it gives me the following error when I run python:
raise ConnectionError(self._error_message(e))
kombu.exceptions.OperationalError: Error 111 connecting to localhost:6379. Connection refused.
This is my tasks.py file in my application directory:
from celery import Celery
import os

app = Celery('tasks', broker='redis://localhost:6379/0')
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])

@app.task
def add(x, y):
    return x + y
requirements.txt:
django
gunicorn
django-heroku
celery
redis
celery-with-redis
django-celery
kombu
I have set the worker dyno count to 1.
Funny thing is, I could have sworn it was working before; now it doesn't work for some reason.
Once you have a minimal django-celery-redis project set up locally, here is how you deploy it on Heroku:
Add to your tasks.py:
import os
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
Make sure your requirements.txt is like this:
django
gunicorn
django-heroku
celery
redis
Add to your Procfile: "worker: celery worker --app=hello.tasks.app"
Make sure it still runs locally
enter into terminal: "export REDIS_URL=redis://"
run "heroku local&"
run python
import hello.tasks
hello.tasks.add.delay(1,2)
Should return something like:
<AsyncResult: e1debb39-b61c-47bc-bda3-ee037d34a6c4>
"heroku apps:create minimal-django-celery-redis"
"heroku addons:create heroku-redis -a minimal-django-celery-redis"
"git add ."
"git commit -m "Demo""
"git push heroku master"
"heroku open&"
"heroku ps:scale worker=1"
"heroku run python"
import hello.tasks
hello.tasks.add.delay(1, 2)
You should see the task running in the application logs: "heroku logs -t -p worker"
This solved it for me; I forgot to import celery in project/__init__.py, like so:
from .celery import app as celery_app
__all__ = ("celery_app",)
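(This is the standard Django/Celery integration pattern: importing the app in the package's __init__.py ensures it is loaded whenever Django starts, so that the @shared_task decorator binds tasks to it.)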

How to set a conda env with uwsgi and supervisor?

I'm trying to run a Flask app using a conda env with uwsgi and supervisor.
I managed to solve a first issue regarding the path of the wsgi script, but I cannot find how to set the conda env.
My uwsgi config file /home/me/Development/flask/myflaskapp/myflaskapp.ini is
[uwsgi]
module = wsgi
master = true
process = 2
chmod-socket = 666
chdir = /home/me/Development/flask/myflaskapp
socket = /home/me/Development/flask/myflaskapp/run/myflaskapp.sock
callable = app
vacuum = true
and my supervisor config is
[program:uwsgi-myflaskapp]
command=/home/me/Development/miniconda/envs/myflaskapp/bin/uwsgi /home/me/Development/flask/myflaskapp/myflaskapp.ini
autostart=true
autorestart=true
stdout_logfile=/home/me/Development/flask/myflaskapp/log/uwsgi-myflaskapp.log
redirect_stderr=true
exitcodes=0
When I start uwsgi through supervisor I get
*** Operational MODE: single process ***
Traceback (most recent call last):
  File "./wsgi.py", line 1, in <module>
    from myflaskapp import app
  File "./myflaskapp/__init__.py", line 1, in <module>
    from flask import Flask
ImportError: No module named flask
So I guess the conda env is not set. How can I set it?
I had to set PATH in my supervisor config file
environment=PATH=/home/me/Development/miniconda/envs/myflaskapp/bin
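Note that this replaces PATH for the child process rather than prepending to it, so if your app shells out to other executables you may want to keep the system directories too, e.g. environment=PATH=/home/me/Development/miniconda/envs/myflaskapp/bin:/usr/local/bin:/usr/bin:/bin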
You can use the -H flag when starting uwsgi from the command line to point it at a virtualenv:
http://uwsgi-docs.readthedocs.org/en/latest/Options.html#virtualenv
So in your case, in the supervisor config, change your command to:
command=/home/me/Development/miniconda/envs/myflaskapp/bin/uwsgi -H /path/to/your/virtualenv /home/me/Development/flask/myflaskapp/myflaskapp.ini
You can find your virtualenv path with
which python
On the command line with your virtualenv activated.
I know it is late, but this should also work:
command=bash -c "source /path_to_conda/bin/activate && source activate env_name && program_to_run --config=config_path command"
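(On recent conda versions, "conda activate env_name" replaces "source activate env_name", but the idea is the same.)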

uwsgi: no app loaded. going in full dynamic mode

In my uwsgi config, I have these options:
[uwsgi]
chmod-socket = 777
socket = 127.0.0.1:9031
plugins = python
pythonpath = /adminserver/
callable = app
master = True
processes = 4
reload-mercy = 8
cpu-affinity = 1
max-requests = 2000
limit-as = 512
reload-on-as = 256
reload-on-rss = 192
no-orphans
vacuum
My app structure looks like this:
/adminserver
app.py
...
My app.py has these bits of code:
app = Flask(__name__)
...
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5003, debug=True)
The result is that when I try to curl my server, I get this error:
Wed Sep 11 23:28:56 2013 - added /adminserver/ to pythonpath.
Wed Sep 11 23:28:56 2013 - *** no app loaded. going in full dynamic mode ***
Wed Sep 11 23:28:56 2013 - *** uWSGI is running in multiple interpreter mode ***
What do the module and callable options do? The docs say:
module, wsgi Argument: string
Load a WSGI module as the application. The module (sans .py) must be
importable, ie. be in PYTHONPATH.
This option may be set with -w from the command line.
callable Argument: string Default: application
Set default WSGI callable name.
Module
A module in Python maps to a file on disk - when you have a directory like this:
/some-dir
module1.py
module2.py
If you start up a python interpreter while the current working directory is /some-dir you will be able to import each of the modules:
some-dir$ python
>>> import module1, module2
# Module1 and Module2 are now imported
Python searches sys.path (and a few other things, see the docs on import for more information) for a file that matches the name you are trying to import. uwsgi uses Python's import process under the covers to load the module that contains your WSGI application.
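A quick way to check what uwsgi will see is to reproduce that lookup by hand. A minimal sketch, assuming the question's pythonpath entry /adminserver and its app.py:
import sys

# Mimic uwsgi's pythonpath option, then try the import uwsgi would
# perform if the config had `module = app`:
sys.path.insert(0, '/adminserver')
import app                  # succeeds only if /adminserver/app.py is importable
print(app.app)              # the object that `callable = app` points uwsgi at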
Callable
The WSGI PEPs (333 and 3333) specify that a WSGI application is a callable that takes two arguments and returns an iterable that yields bytestrings:
# simple_wsgi.py
# The simplest WSGI application
HELLO_WORLD = b"Hello world!\n"

def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [HELLO_WORLD]
uwsgi needs to know the name of a symbol inside of your module that maps to the WSGI application callable, so it can pass in the environment and the start_response callable - essentially, it needs to be able to do the following:
wsgi_app = getattr(simple_wsgi, 'simple_app')
TL;PC (Too Long; Prefer Code)
A simple parallel of what uwsgi is doing:
# Use `module` to know *what* to import
import simple_wsgi
# construct request environment from user input
# create a callable to pass for start_response
# and then ...
# use `callable` to know what to call
wsgi_app = getattr(simple_wsgi, 'simple_app')
# and then call it to respond to the user
response = wsgi_app(environ, start_response)
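Applied to the question's setup: uwsgi has a callable (app) but no module to import it from, which is why it falls back to "full dynamic mode". Adding module = app (for /adminserver/app.py) to the [uwsgi] section alongside callable = app should fix it.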
For anyone else having this problem, if you are sure your configuration is correct, you should check your uWSGI version.
Ubuntu 12.04 LTS provides 1.0.3. Removing that and using pip to install 2.0.4 resolved my issues.
First, check whether your configuration is correct.
my uwsgi.ini configuration:
[uwsgi]
chdir=/home/air/repo/Qiy
uid=nobody
gid=nobody
module=Qiy.wsgi:application
socket=/home/air/repo/Qiy/uwsgi.sock
master=true
workers=5
pidfile=/home/air/repo/Qiy/uwsgi.pid
vacuum=true
thunder-lock=true
enable-threads=true
harakiri=30
post-buffering=4096
daemonize=/home/air/repo/Qiy/uwsgi.log
Then run uwsgi with uwsgi --ini uwsgi.ini.
If that doesn't work, rm -rf the venv directory, re-initialize the venv, and retry the steps above.
Re-initializing the venv solved my issue; the problem seemed to be that I pip3-installed some packages from requirements.txt, upgraded pip, and then installed the uwsgi package. So I deleted the venv and re-initialized my virtual environment.

ImportError: No module named '_sysconfigdata_m'

I want to use Django, uwsgi and nginx.
First I want to test uwsgi. I wrote a test.py file:
def application(env, start_response):
    start_response('200 OK', [('Content-Type','text/html')])
    return "Hello World"
When I typed the following command and tried to run uwsgi,
uwsgi --http :8001 --wsgi-file test.py
it told me:
ImportError: No module named '_sysconfigdata_m'
I searched a lot, but I didn't find the answer.
I use Ubuntu 13.04, Python 3.3.2 and uwsgi 1.9.14.
Thanks a lot.