(Internal Server Error) Nginx + uWSGI + Django: no module named django

So I'm trying to deploy a Django app with Nginx and uWSGI. Nginx is running well and the Django app works fine with manage.py runserver, but when I try to deploy it with uWSGI I get an Internal Server Error, and when I check my uWSGI logs this is what I find: No module named django.
The virtualenv Python version is 3.6.9. I don't know whether this error is caused by an incompatibility between the virtual environment's Python version and the one uWSGI uses, or whether I just missed something.
This is my uWSGI ini file:
[uwsgi]
vhost = true
plugins = python
socket = /tmp/mainsock
master = true
enable-threads = true
processes = 4
wsgi-file = /var/www/PTapp-launch/ptapp/wsgi.py
virtualenv = /var/www/venv/site
chdir = /var/www/PTapp-launch
touch-reload = /var/www/PTapp-launch/reload
env = DJANGO_ENV=production
env = DATABASE_NAME=personal_trainer
env = DATABASE_USER=postgres
env = DATABASE_PASSWORD=********
env = DATABASE_HOST=localhost
env = DATABASE_PORT=5432
env = ALLOWED_HOSTS=141.***.***.***
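One quick check, independent of uWSGI (a sketch, assuming the virtualenv path from the config above), is whether Django is importable from that environment's interpreter at all:
# Run Python from the virtualenv referenced by "virtualenv =" above.
/var/www/venv/site/bin/python -c "import django; print(django.get_version())"
# If this also fails with "No module named django", the package is missing
# from the virtualenv itself rather than being a uWSGI problem:
/var/www/venv/site/bin/pip install django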

I have finally found the problem. When I was running manage.py runserver I had to use sudo; when I didn't, it threw the same error (no module named Django). So what I did was uninstall all the requirements, use the superuser to create a new virtual environment and link it in uwsgi.ini. I also restarted both Nginx and uWSGI, and now everything works fine.
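For reference, the fix described above amounts to roughly the following (a sketch with assumed paths and service names, not the exact commands used):
# Recreate the virtualenv as the superuser and reinstall the project requirements
# (the requirements file location is an assumption).
sudo python3 -m venv /var/www/venv/site
sudo /var/www/venv/site/bin/pip install -r /var/www/PTapp-launch/requirements.txt
# Point "virtualenv =" in uwsgi.ini at the new environment, then restart both services
# (service names assume a systemd setup).
sudo systemctl restart uwsgi
sudo systemctl restart nginx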

Related

Why is my project's uwsgi.ini throwing an Internal Server Error?

I am configuring a Django + Nginx server. Up to this stage everything is working fine: uwsgi --socket ProjetAgricole.sock --module ProjetAgricole.wsgi --chmod-socket=666. However, after configuring the .ini file and running uwsgi --ini ProjetAgricole_uwsgi.ini, I get this output: [uWSGI] getting INI configuration from ProjetAgricole_uwsgi.ini. But when I open the app from the browser I get an Internal Server Error.
Here is my .ini file:
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /home/dnsae/my_project/ProjetAgricole/
# Django's wsgi file
module = ProjetAgricole.wsgi
# the virtualenv (full path)
home = /home/dnsae/my_project/my_venv
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = /home/dnsae/my_project/ProjetAgricole/ProjetAgricole.sock
# ... with appropriate permissions - may be needed
chmod-socket = 666
# clear environment on exit
vacuum = true
# daemonize uwsgi and write message into given log
daemonize = /home/dnsae/my_project/uwsgi-emperor.log
I restarted the server but I am still getting the same error.
Please assist me.
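One way to narrow this down (a sketch using the paths from the .ini above) is to read the daemonized log and confirm that the module named in the config imports cleanly from the virtualenv:
# The daemonize option sends uWSGI's output here; the traceback behind the
# Internal Server Error usually ends up in this file.
tail -n 50 /home/dnsae/my_project/uwsgi-emperor.log
# Check that "module = ProjetAgricole.wsgi" is importable from the venv's Python.
cd /home/dnsae/my_project/ProjetAgricole/
/home/dnsae/my_project/my_venv/bin/python -c "import ProjetAgricole.wsgi"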

Logging in Django does not work when using uWSGI

I have a problem logging to a file using Python's built-in logging module.
Here is an example of how logs are generated:
logging.info('a log message')
Logging works fine when running the app directly through Python. However, when running the app through uWSGI, logging does not work.
Here is my uWSGI configuration:
[uwsgi]
module = myapp.app:application
master = true
processes = 5
uid = nginx
socket = /run/uwsgi/myapp.sock
chown-socket = nginx:nginx
chmod-socket = 660
vacuum = true
die-on-term = true
logto = /var/log/myapp/myapp.log
logfile-chown = nginx:nginx
logfile-chmod = 640
EDIT:
The path /var/log/myapp/myapp.log is receiving the nginx/uWSGI access logs. There is another path configured in a settings.py file; that second path is where the application logs are meant to go, but none appear when running under uWSGI.
Thanks in advance
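For illustration, the settings.py logging configuration described above might look roughly like this (an assumed sketch; the actual file is not shown in the question):
# settings.py (hypothetical) - a file handler for application logs, separate
# from the request log uWSGI writes to /var/log/myapp/myapp.log.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'app_file': {
            'class': 'logging.FileHandler',
            'filename': '/var/log/myapp/app.log',  # assumed second path
        },
    },
    'root': {
        'handlers': ['app_file'],
        'level': 'INFO',
    },
}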

uwsgi and flask - cannot import name "appl"

I have created several servers without any issue using the nginx - uwsgi - flask stack with virtualenv.
With the current one, uwsgi is throwing the error cannot import name "appl".
here is the myapp directory structure:
/srv/www/myapp
+ run.py
+ venv/ # virtualenv
+ myapp/
+ __init__.py
+ other modules/
+ logs/
Here is /etc/uwsgi/apps-available/myapp.ini:
[uwsgi]
# Variables
base = /srv/www/myapp
app = run
# Generic Config
# plugins = http, python
# plugins = python
home = %(base)/venv
pythonpath = %(base)
socket = /tmp/%n.sock
module = %(app)
callable = appl
logto = %(base)/logs/uwsgi_%n.log
And this is run.py:
#!/usr/bin/env python
from myapp import appl
if __name__ == '__main__':
    DEBUG = True if appl.config['DEBUG'] else False
    appl.run(debug=DEBUG)
appl is defined in myapp/__init__.py as an instance of Flask().
I carefully checked the Python code, and indeed if I activate the virtualenv manually and execute run.py, everything works like a charm, but uWSGI keeps throwing the import error.
Any suggestions on what I should look into next?
Fixed it, it was just a read-permissions issue. The whole Python app was readable by my user but not by the group, so uWSGI could not find it.
This was a bit tricky because I had deployed successfully many times with the same script and never had permission issues.
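For anyone hitting the same thing, the permissions fix described above boils down to something like this (a sketch, assuming the directory layout shown earlier):
# The group needs read on files and execute (traverse) on directories for a
# worker running under another user/group to import the package.
sudo chmod -R g+rX /srv/www/myapp
# Verify the resulting permissions.
ls -l /srv/www/myapp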

uwsgi: no app loaded. going in full dynamic mode

In my uwsgi config, I have these options:
[uwsgi]
chmod-socket = 777
socket = 127.0.0.1:9031
plugins = python
pythonpath = /adminserver/
callable = app
master = True
processes = 4
reload-mercy = 8
cpu-affinity = 1
max-requests = 2000
limit-as = 512
reload-on-as = 256
reload-on-rss = 192
no-orphans = true
vacuum = true
My app structure looks like this:
/adminserver
app.py
...
My app.py has these bits of code:
from flask import Flask

app = Flask(__name__)
...
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5003, debug=True)
The result is that when I try to curl my server, I get this error:
Wed Sep 11 23:28:56 2013 - added /adminserver/ to pythonpath.
Wed Sep 11 23:28:56 2013 - *** no app loaded. going in full dynamic mode ***
Wed Sep 11 23:28:56 2013 - *** uWSGI is running in multiple interpreter mode ***
What do the module and callable options do? The docs say:
module, wsgi Argument: string
Load a WSGI module as the application. The module (sans .py) must be
importable, ie. be in PYTHONPATH.
This option may be set with -w from the command line.
callable Argument: string Default: application
Set default WSGI callable name.
Module
A module in Python maps to a file on disk - when you have a directory like this:
/some-dir
module1.py
module2.py
If you start up a python interpreter while the current working directory is /some-dir you will be able to import each of the modules:
some-dir$ python
>>> import module1, module2
# Module1 and Module2 are now imported
Python searches sys.path (and a few other things, see the docs on import for more information) for a file that matches the name you are trying to import. uwsgi uses Python's import process under the covers to load the module that contains your WSGI application.
Callable
The WSGI PEPs (333 and 3333) specify that a WSGI application is a callable that takes two arguments and returns an iterable that yields bytestrings:
# simple_wsgi.py
# The simplest WSGI application
HELLO_WORLD = b"Hello world!\n"
def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [HELLO_WORLD]
uwsgi needs to know the name of a symbol inside of your module that maps to the WSGI application callable, so it can pass in the environment and the start_response callable - essentially, it needs to be able to do the following:
wsgi_app = getattr(simple_wsgi, 'simple_app')
TL;PC (Too Long; Prefer Code)
A simple parallel of what uwsgi is doing:
# Use `module` to know *what* to import
import simple_wsgi
# construct request environment from user input
# create a callable to pass for start_response
# and then ...
# use `callable` to know what to call
wsgi_app = getattr(simple_wsgi, 'simple_app')
# and then call it to respond to the user
response = wsgi_app(environ, start_response)
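Applied to the configuration in the question, where /adminserver/app.py defines app = Flask(__name__), that means telling uWSGI which module to import and which object inside it to call, roughly (an untested sketch of the relevant lines):
[uwsgi]
pythonpath = /adminserver/
# module tells uWSGI what to import: app.py -> "app"
module = app
# callable names the WSGI object inside that module: app = Flask(__name__)
callable = app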
For anyone else having this problem, if you are sure your configuration is correct, you should check your uWSGI version.
Ubuntu 12.04 LTS provides 1.0.3. Removing that and using pip to install 2.0.4 resolved my issues.
First, check whether your configuration is correct.
My uwsgi.ini configuration:
[uwsgi]
chdir=/home/air/repo/Qiy
uid=nobody
gid=nobody
module=Qiy.wsgi:application
socket=/home/air/repo/Qiy/uwsgi.sock
master=true
workers=5
pidfile=/home/air/repo/Qiy/uwsgi.pid
vacuum=true
thunder-lock=true
enable-threads=true
harakiri=30
post-buffering=4096
daemonize=/home/air/repo/Qiy/uwsgi.log
Then use uwsgi --ini uwsgi.ini to run uWSGI.
If that does not work, rm -rf the venv directory, re-initialize the virtualenv, and retry the steps above.
Re-initializing the venv is what solved my issue. The problem seems to have appeared after I pip3 installed some packages from requirements.txt, upgraded pip, and then installed the uwsgi package, so I deleted the venv and re-created my virtual environment.

Install New Relic with Django 1.5 in a virtualenv with supervisord

I have a perfectly good Django 1.5 site running on a production server, inside a virtualenv and controlled with supervisord.
I cannot, however, get New Relic monitoring going. Everything starts fine, but my application does not show up in the New Relic dashboard.
here is my supervisor config:
[program:<PROJECTNAME>]
process_name=gunicorn
directory=/var/www/<PROJECTNAME>/<PROJECTNAME>
environment=
DJANGO_SETTINGS_MODULE='settings.prod',
SECRET_KEY='xxx',
DB_USER='xxx',
DB_PASSWD='xxx',
NEW_RELIC_CONFIG_FILE="/var/www/<PROJECTNAME>/newrelic.ini"
command=/var/www/<PROJECTNAME>/env/bin/newrelic-admin run-program /var/www/<PROJECTNAME>/env/bin/gunicorn wsgi:application -c /var/www/<PROJECTNAME>/<PROJECTNAME>/gunicorn_settings.py
group=www-data
autostart=True
stdout_logfile = /var/log/webapps/<PROJECTNAME>/gunicorn.log
logfile_maxbytes = 100MB
redirect_stderr=True
And this is the gunicorn_settings config file:
pythonpath = '/var/www/<PROJECTNAME>/'
pidfile = '/tmp/<PROJECTNAME>.pid'
user = 'www-data'
group = 'www-data'
proc_name = '<PROJECTNAME>'
workers = 2
bind = 'unix:/tmp/gunicorn-<PROJECTNAME>.sock'
stdout_logfile = '/var/log/gunicorn/<PROJECTNAME>.log'
loglevel = 'debug'
debug = True
wsgi.py adds an extra pythonpath entry, /var/www/.
I have another Django 1.2 site running just fine inside a virtualenv with supervisor and New Relic on the same server.
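For completeness, the newrelic.ini that the supervisor config points at would normally contain at least an app name and license key, along these lines (an assumed minimal sketch, not the poster's actual file):
[newrelic]
license_key = <YOUR_LICENSE_KEY>
app_name = <PROJECTNAME>
monitor_mode = true
log_file = /var/log/webapps/<PROJECTNAME>/newrelic-agent.log
log_level = info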