How to set a conda env with uwsgi and supervisor? - flask

I'm trying to run a Flask app using a conda env with uwsgi and supervisor.
I managed to solve an earlier issue regarding the path of the wsgi script, but I cannot figure out how to set the conda env.
My uwsgi config file /home/me/Development/flask/myflaskapp/myflaskapp.ini is
[uwsgi]
module = wsgi
master = true
processes = 2
chmod-socket = 666
chdir = /home/me/Development/flask/myflaskapp
socket = /home/me/Development/flask/myflaskapp/run/myflaskapp.sock
callable = app
vacuum = true
and my supervisor config is
[program:uwsgi-myflaskapp]
command=/home/me/Development/miniconda/envs/myflaskapp/bin/uwsgi /home/me/Development/flask/myflaskapp/myflaskapp.ini
autostart=true
autorestart=true
stdout_logfile=/home/me/Development/flask/myflaskapp/log/uwsgi-myflaskapp.log
redirect_stderr=true
exitcodes=0
When I start uwsgi through supervisor I get
*** Operational MODE: single process ***
Traceback (most recent call last):
File "./wsgi.py", line 1, in <module>
from myflaskapp import app
File "./myflaskapp/__init__.py", line 1, in <module>
from flask import Flask
ImportError: No module named flask
So I guess the conda env is not set. How can I set it?

I had to set PATH in my supervisor config file
environment=PATH=/home/me/Development/miniconda/envs/myflaskapp/bin
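Putting it together, a minimal sketch of the full supervisor program block with that line added (all paths taken from the question above):
[program:uwsgi-myflaskapp]
command=/home/me/Development/miniconda/envs/myflaskapp/bin/uwsgi /home/me/Development/flask/myflaskapp/myflaskapp.ini
environment=PATH=/home/me/Development/miniconda/envs/myflaskapp/bin
autostart=true
autorestart=true
stdout_logfile=/home/me/Development/flask/myflaskapp/log/uwsgi-myflaskapp.log
redirect_stderr=true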

You use the -H flag when starting uwsgi from the command line to point it at a virtualenv:
http://uwsgi-docs.readthedocs.org/en/latest/Options.html#virtualenv
So in your case, in the supervisor config, change your command to:
command=/home/me/Development/miniconda/envs/myflaskapp/bin/uwsgi -H /path/to/your/virtualenv /home/me/Development/flask/myflaskapp/myflaskapp.ini
You can find your virtualenv path with
which python
on the command line with your virtualenv activated.
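With the conda env from the question, that would presumably look like this (the env prefix is inferred from the uwsgi binary path shown above):
command=/home/me/Development/miniconda/envs/myflaskapp/bin/uwsgi -H /home/me/Development/miniconda/envs/myflaskapp /home/me/Development/flask/myflaskapp/myflaskapp.ini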

I know it is late, but this should also work:
command=bash -c "source /path_to_conda/bin/activate && source activate env_name && program_to_run --config=config_path command"
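Applied to the setup in the question, that might look something like this (the miniconda prefix and env name are inferred from the paths above):
command=bash -c "source /home/me/Development/miniconda/bin/activate && source activate myflaskapp && uwsgi /home/me/Development/flask/myflaskapp/myflaskapp.ini"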

Related

Why is my project's uwsgi.ini throwing an Internal Server Error?

I am configuring a Django/Nginx server. Up to this stage everything is working fine: uwsgi --socket ProjetAgricole.sock --module ProjetAgricole.wsgi --chmod-socket=666. However, after configuring the .ini file and running uwsgi --ini ProjetAgricole_uwsgi.ini, I get the output [uWSGI] getting INI configuration from ProjetAgricole_uwsgi.ini, but when I open the app in the browser I get an Internal Server Error.
Here is my .ini file:
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /home/dnsae/my_project/ProjetAgricole/
# Django's wsgi file
module = ProjetAgricole.wsgi
# the virtualenv (full path)
home = /home/dnsae/my_project/my_venv
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = /home/dnsae/my_project/ProjetAgricole/ProjetAgricole.sock
# ... with appropriate permissions - may be needed
chmod-socket = 666
# clear environment on exit
vacuum = true
# daemonize uwsgi and write message into given log
daemonize = /home/dnsae/my_project/uwsgi-emperor.log
I restarted the server but I am still getting the same error.
Please assist me.

(internal error) Nginx + uwsgi + django: no module named django

I'm trying to deploy a Django app with nginx and uwsgi. Nginx is running well and the Django app works fine with manage.py runserver, but when I try to deploy it with uwsgi it returns an Internal Server Error, and when I check my uwsgi logs this is what I get: No module named django.
The virtualenv Python version is 3.6.9. I don't know if this error is caused by an incompatibility between the virtual environment's Python version and the one uwsgi uses, or just because I missed something. My uwsgi specs are below.
This is my uwsgi ini file:
[uwsgi]
vhost = true
plugins = python
socket = /tmp/mainsock
master = true
enable-threads = true
processes = 4
wsgi-file = /var/www/PTapp-launch/ptapp/wsgi.py
virtualenv = /var/www/venv/site
chdir = /var/www/PTapp-launch
touch-reload = /var/www/PTapp-launch/reload
env = DJANGO_ENV=production
env = DATABASE_NAME=personal_trainer
env = DATABASE_USER=postgres
env = DATABASE_PASSWORD=********
env = DATABASE_HOST=localhost
env = DATABASE_PORT=5432
env = ALLOWED_HOSTS=141.***.***.***
I have finally found the problem. When I was running manage.py runserver I had to use sudo; when I didn't, it threw the same error (no module named Django). So what I did was uninstall all requirements, then use the superuser to create a new virtual environment and link it in uwsgi.ini. I also restarted both nginx and uwsgi, and everything works fine now.
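A rough sketch of those steps, assuming the paths from the uwsgi config above, a requirements.txt in the project root, and nginx/uwsgi running as system services (all of these are assumptions):
# re-create the virtualenv referenced by virtualenv = /var/www/venv/site
sudo rm -rf /var/www/venv/site
sudo python3 -m venv /var/www/venv/site
sudo /var/www/venv/site/bin/pip install -r /var/www/PTapp-launch/requirements.txt
# restart both services
sudo systemctl restart nginx uwsgi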

django-celery as a daemon: not working

I have a website project written with django, celery and rabbitmq. And a '.delay' task (the task creates a new folder) is called when a button is clicked.
Everything works fine with celery (the .delay task is called, and a new folder is created) when I run celery with manage.py like:
python manage.py celeryd
However, when I ran celery as a daemon, even though there was no error, the task was not executed (no folder was created).
I was kind of following the tutorial: http://www.arruda.blog.br/programacao/django-celery-in-daemon/
My settings are:
/etc/default/celeryd:
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1"
# Where to chdir at start.
CELERYD_CHDIR="/var/www/myproject"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS=""
# Name of the celery config module.
CELERY_CONFIG_MODULE="myproject.settings"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/w1.log"
CELERYD_PID_FILE="/var/run/celery/w1.pid"
# Workers should run as an unprivileged user.
#CELERYD_USER="root"
#CELERYD_GROUP="root"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="myproject.settings"
The corresponding folders have been created too.
For the /etc/init.d/celeryd init script, I used this version:
https://raw.github.com/ask/celery/1da3aa43d1e6de525beeda398d0acb8841d5b4d2/contrib/generic-init.d/celeryd
For /var/www/myproject/myproject/settings.py, I have:
import djcelery
djcelery.setup_loader()
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
INSTALLED_APPS = (
    'djcelery',
    ...
)
There was no error when I started celery using:
/etc/init.d/celeryd start
but no results either. Does anyone know how to fix the problem?
Celery's docs have a daemon troubleshooting section that might be helpful. Celery has a flag that lets you run your init script without actually daemonizing, and that should show what's going wrong:
C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
Newer versions of that init script have a dryrun command that's an easier-to-remember way to run the start command without daemonizing.
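With a recent enough init script, that would presumably be invoked as:
/etc/init.d/celeryd dryrun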

Gunicorn Environment Variable Setting

I'm currently having difficulty passing environment variables into Gunicorn for my Django project. I'm on the latest 19.1 version. I have a wsgi.py file like so:
import os
import sys
from django.core.wsgi import get_wsgi_application
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
PROJECT_DIR = os.path.abspath(os.path.join(BASE_DIR, '..'))
sys.path.append(PROJECT_DIR)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")
def application(environ, start_response):
    _application = get_wsgi_application()
    os.environ['SERVER_ENV'] = environ['SERVER_ENV']
    os.environ['SERVER_ID'] = environ['SERVER_ID']
    return _application(environ, start_response)
When I run gunicorn from the command line as:
SERVER_ENV=TEST SERVER_ID=TEST gunicorn -b 127.0.0.1:8080 --error-logfile - --access-logfile - app.wsgi:application
and I then pass a request to gunicorn I keep getting:
2014-08-01 08:39:17 [21462] [ERROR] Error handling request
Traceback (most recent call last):
File "/opt/virtualenv/python-2.7.5/django-1.5.5/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 93, in handle
self.handle_request(listener, req, client, addr)
File "/opt/virtualenv/python-2.7.5/django-1.5.5/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 134, in handle_request
respiter = self.wsgi(environ, resp.start_response)
File "/opt/sites/itracker/wsgi.py", line 18, in application
os.environ['SERVER_ENV'] = environ['SERVER_ENV']
KeyError: 'SERVER_ENV'
Printing out the environ values confirms that the environment variables are not being passed in. I've tried setting the environment variables in the virtualenv activation script and also in a dedicated gunicorn shell script and also by setting the environment variables using the --env flag but nothing seems to work.
Any ideas?
I ran into a similar problem when deploying gunicorn as a systemd service (gunicorn.service), and I passed environment variables to it through a file.
In gunicorn.service:
[Service]
...
EnvironmentFile=/pathto/somefilewith_secrets
...
For example (cat /etc/systemd/system/gunicorn.service)
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/10008/digichainOpen
EnvironmentFile=/home/ubuntu/10008/digichainOpen/.env
ExecStart=/home/ubuntu/.local/share/virtualenvs/digichainOpen-Zk2Jnvjv/bin/gunicorn \
--worker-class=gevent --workers 4 \
--bind unix:/home/ubuntu/10008/digichainOpen/gunicorn.sock digichainOpen.wsgi:application
[Install]
WantedBy=multi-user.target
and the .env file can be:
my_var=someValue
some_secret=secretvalue
another_secret=blah
You just have to export your environment variables.
export SERVER_ENV=TEST
export SERVER_ID=TEST
gunicorn -b 127.0.0.1:8080 --error-logfile - --access-logfile - app.wsgi:application
And in your code you can get them like this:
os.getenv('SERVER_ENV')
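For example, a minimal sketch of reading both values at module level (the fallback defaults here are placeholders, not from the question):
import os

# values exported in the shell before gunicorn starts end up in os.environ
SERVER_ENV = os.getenv('SERVER_ENV', 'TEST')
SERVER_ID = os.getenv('SERVER_ID', 'unknown')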
If you want to run Django using a gunicorn config file, write a config.py file:
command = 'venv/bin/gunicorn'
pythonpath = 'venv'
bind = '127.0.0.1:8000'
workers = 2
raw_env = ["VARIABLE_HERE=VARIABLE_VALUE_HERE"]
wsgi_app = "project.wsgi"
Run it like this, from inside the project directory:
gunicorn -c config.py
I don't quite understand what you are trying to do here. If you pass environment variables on the bash command line, they are already in os.environ: there is no need to get them from anywhere else. The WSGI environ dictionary is built from the request, not from the shell.
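A minimal sketch of that distinction applied to the wsgi.py from the question, assuming the variables were exported in the shell before starting gunicorn:
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")
_application = get_wsgi_application()

def application(environ, start_response):
    # environ holds per-request WSGI/CGI data (PATH_INFO, headers, ...)
    # shell variables such as SERVER_ENV live in os.environ instead
    server_env = os.environ.get('SERVER_ENV')
    return _application(environ, start_response)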

uwsgi: no app loaded. going in full dynamic mode

In my uwsgi config, I have these options:
[uwsgi]
chmod-socket = 777
socket = 127.0.0.1:9031
plugins = python
pythonpath = /adminserver/
callable = app
master = True
processes = 4
reload-mercy = 8
cpu-affinity = 1
max-requests = 2000
limit-as = 512
reload-on-as = 256
reload-on-rss = 192
no-orphans = true
vacuum = true
My app structure looks like this:
/adminserver
app.py
...
My app.py has these bits of code:
app = Flask(__name__)
...
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5003, debug=True)
The result is that when I try to curl my server, I get this error:
Wed Sep 11 23:28:56 2013 - added /adminserver/ to pythonpath.
Wed Sep 11 23:28:56 2013 - *** no app loaded. going in full dynamic mode ***
Wed Sep 11 23:28:56 2013 - *** uWSGI is running in multiple interpreter mode ***
What do the module and callable options do? The docs say:
module, wsgi Argument: string
Load a WSGI module as the application. The module (sans .py) must be
importable, ie. be in PYTHONPATH.
This option may be set with -w from the command line.
callable Argument: string Default: application
Set default WSGI callable name.
Module
A module in Python maps to a file on disk - when you have a directory like this:
/some-dir
module1.py
module2.py
If you start up a python interpreter while the current working directory is /some-dir you will be able to import each of the modules:
some-dir$ python
>>> import module1, module2
# Module1 and Module2 are now imported
Python searches sys.path (and a few other things, see the docs on import for more information) for a file that matches the name you are trying to import. uwsgi uses Python's import process under the covers to load the module that contains your WSGI application.
Callable
The WSGI PEPs (333 and 3333) specify that a WSGI application is a callable that takes two arguments and returns an iterable that yields bytestrings:
# simple_wsgi.py
# The simplest WSGI application
HELLO_WORLD = b"Hello world!\n"
def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [HELLO_WORLD]
uwsgi needs to know the name of a symbol inside of your module that maps to the WSGI application callable, so it can pass in the environment and the start_response callable - essentially, it needs to be able to do the following:
wsgi_app = getattr(simple_wsgi, 'simple_app')
TL;PC (Too Long; Prefer Code)
A simple parallel of what uwsgi is doing:
# Use `module` to know *what* to import
import simple_wsgi
# construct request environment from user input
# create a callable to pass for start_response
# and then ...
# use `callable` to know what to call
wsgi_app = getattr(simple_wsgi, 'simple_app')
# and then call it to respond to the user
response = wsgi_app(environ, start_response)
For anyone else having this problem, if you are sure your configuration is correct, you should check your uWSGI version.
Ubuntu 12.04 LTS provides 1.0.3. Removing that and using pip to install 2.0.4 resolved my issues.
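A rough sketch of that swap (the apt package names here are assumptions about how the distro version was installed):
sudo apt-get remove uwsgi uwsgi-plugin-python
sudo pip install uwsgi==2.0.4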
First, check whether your configuration is correct.
My uwsgi.ini configuration:
[uwsgi]
chdir=/home/air/repo/Qiy
uid=nobody
gid=nobody
module=Qiy.wsgi:application
socket=/home/air/repo/Qiy/uwsgi.sock
master=true
workers=5
pidfile=/home/air/repo/Qiy/uwsgi.pid
vacuum=true
thunder-lock=true
enable-threads=true
harakiri=30
post-buffering=4096
daemonize=/home/air/repo/Qiy/uwsgi.log
Then use uwsgi --ini uwsgi.ini to run uwsgi.
If that doesn't work, rm -rf the venv directory, re-create the venv, and retry these steps.
Re-creating the venv solved my issue. It seems the problem appeared when I pip3 installed some packages from requirements.txt, upgraded pip, and then installed the uwsgi package, so I deleted the venv and re-created my virtual environment.
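A rough sketch of those re-creation steps, run from the project directory and assuming the venv directory sits next to uwsgi.ini with a requirements.txt in the repo (directory names are assumptions):
rm -rf venv
python3 -m venv venv
venv/bin/pip install -r requirements.txt
venv/bin/pip install uwsgi
venv/bin/uwsgi --ini uwsgi.ini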