Logging in Django does not work when using uWSGI

I have a problem logging to a file using Python's built-in logging module.
Here is an example of how logs are generated:
logging.info('a log message')
Logging works fine when running the app directly through Python. However, when running the app through uWSGI, logging does not work.
Here is my uWSGI configuration:
[uwsgi]
module = myapp.app:application
master = true
processes = 5
uid = nginx
socket = /run/uwsgi/myapp.sock
chown-socket = nginx:nginx
chmod-socket = 660
vacuum = true
die-on-term = true
logto = /var/log/myapp/myapp.log
logfile-chown = nginx:nginx
logfile-chmod = 640
EDIT:
The path /var/log/myapp/myapp.log is receiving the nginx access logs. There is another path configured in settings.py; that second path is where the application logs are meant to go, but none appear when running under uWSGI.
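For reference, the kind of settings.py logging configuration being described usually looks something like the sketch below; the filename and logger layout are placeholders, not the project's actual values. One thing worth noting from the uWSGI config above is that the workers run as the nginx user, so whatever file the handler points to has to be writable by that user.
# settings.py -- minimal sketch of a file-based LOGGING config (placeholder path)
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': '/var/log/myapp/application.log',  # hypothetical path
        },
    },
    'root': {
        'handlers': ['file'],
        'level': 'INFO',
    },
}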
Thanks in advance

Related

Why is my project's uwsgi.ini throwing an Internal Server Error?

I am configuring a Django/Nginx server. Up to this stage everything is working fine: uwsgi --socket ProjetAgricole.sock --module ProjetAgricole.wsgi --chmod-socket=666. However, after configuring the .ini file and running uwsgi --ini ProjetAgricole_uwsgi.ini, I get the output [uWSGI] getting INI configuration from ProjetAgricole_uwsgi.ini, but when I open the app from the browser I get an Internal Server Error.
Here is my .ini file:
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /home/dnsae/my_project/ProjetAgricole/
# Django's wsgi file
module = ProjetAgricole.wsgi
# the virtualenv (full path)
home = /home/dnsae/my_project/my_venv
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = /home/dnsae/my_project/ProjetAgricole/ProjetAgricole.sock
# ... with appropriate permissions - may be needed
chmod-socket = 666
# clear environment on exit
vacuum = true
# daemonize uwsgi and write message into given log
daemonize = /home/dnsae/my_project/uwsgi-emperor.log
I restarted the server, but I am still getting the same error.
Please assist me.

(Internal Server Error) Nginx + uWSGI + Django: no module named django

I'm trying to deploy a Django app with nginx and uWSGI. Nginx is running well and the Django app works with manage.py runserver, but when I try to deploy it with uWSGI I get an Internal Server Error, and when I check my uWSGI logs this is what I find: No module named django.
The virtualenv's Python version is 3.6.9. I don't know whether this error is caused by an incompatibility between the virtual environment's Python version and the one uWSGI uses, or just because I missed something.
This is my uWSGI ini file:
[uwsgi]
vhost = true
plugins = python
socket = /tmp/mainsock
master = true
enable-threads = true
processes = 4
wsgi-file = /var/www/PTapp-launch/ptapp/wsgi.py
virtualenv = /var/www/venv/site
chdir = /var/www/PTapp-launch
touch-reload = /var/www/PTapp-launch/reload
env = DJANGO_ENV=production
env = DATABASE_NAME=personal_trainer
env = DATABASE_USER=postgres
env = DATABASE_PASSWORD=********
env = DATABASE_HOST=localhost
env = DATABASE_PORT=5432
env = ALLOWED_HOSTS=141.***.***.***
I have finally found the problem. When I was running manage.py runserver I had to use sudo; without it, it threw the same error (no module named Django). So what I did was uninstall all the requirements, then use the superuser to create a new virtual environment and link it in uwsgi.ini. I also restarted both nginx and uWSGI, and everything works fine now.
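For anyone hitting the same error, a quick sanity check is to run the interpreter from the virtualenv named in the ini file (here /var/www/venv/site) and confirm that Django is importable from it; a minimal sketch:
# Run with the virtualenv's interpreter that uWSGI points at, e.g.
# /var/www/venv/site/bin/python check_env.py. If this import fails,
# uWSGI will fail the same way.
import sys

try:
    import django
    print("python:", sys.executable)
    print("django:", django.get_version(), "from", django.__file__)
except ImportError:
    print("Django is NOT importable from", sys.executable)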

uWSGI Emperor: share and use a common default ini within projects

I'm using the uWSGI Emperor to manage multiple Flask apps, and for each individual app I have several ini configs, e.g. local.ini and production.ini, that I want to inherit common settings from a central default.ini. I'd like to be able to do this for multiple projects, i.e. different projects can have different defaults. However, I cannot get the Emperor to follow the symlinks and recognize the default.ini file. How do I get the Emperor to follow symlinks so each project can resolve its own default.ini location? I always get the error
realpath() of default.ini failed: No such file or directory
emperor.ini setup file
[uwsgi]
emperor = /www/uwsgi/vassals
emperor-broodlord = 40
logto = /www/uwsgi/log/emperor.log
master = true
thunder-lock = true
enable-threads = true
stats = /tmp/empstats.sock
die-on-term = true
My vassals directory is set up with symlinks that point to the respective projects' ini files:
uwsgi_app1.ini -> /path/to/project1/local.ini
uwsgi_app2.ini -> /path/to/project2/local.ini
For each separate project, I have a default.ini which contains common settings I want to use for both local and production environments, for that project.
Project1
default.ini
[uwsgi]
file = %(wwwdir)/%(module)/run_%(app_name)
daemonize = %(wwwdir)/%(module)/log/%(app_name).log
socket = %(socketdir)/uwsgi_%(app_name).sock
stats = %(socketdir)/%(module)_stats.sock
master = true
processes = 4
chmod-socket = 666
vacuum = true
thunder-lock = true
enable-threads = true
lazy-apps = true
with local.ini and production.ini having slightly different variables:
local
[uwsgi]
callable = app
wwwdir = /localwww/
socketdir = /tmp
module = app1
app_name = app1
ini = default.ini
production
[uwsgi]
callable = app
wwwdir = /home/var-www/
socketdir = /run/uwsgi/tmp
module = app1
app_name = app1
ini = default.ini
If I use the %p magic variable to specify the absolute path, it instead points to the emperor vassal directory www/uwsgi/vassals/default.ini, rather than the real path.
I am using a full path to the default.ini in the project file...
ini = /path/to/my/project/config/uwsgi/defaults.ini

Install New Relic with Django 1.5 in a virtualenv with supervisord

I have a Django 1.5 site running well on a production server, inside a virtualenv and controlled with supervisord.
However, I cannot get the New Relic monitoring going. Everything starts fine, but my application is not showing up in the New Relic dashboard.
Here is my supervisor config:
[program:<PROJECTNAME>]
process_name=gunicorn
directory=/var/www/<PROJECTNAME>/<PROJECTNAME>
environment=
DJANGO_SETTINGS_MODULE='settings.prod',
SECRET_KEY='xxx',
DB_USER='xxx',
DB_PASSWD='xxx',
NEW_RELIC_CONFIG_FILE="/var/www/<PROJECTNAME>/newrelic.ini"
command=/var/www/<PROJECTNAME>/env/bin/newrelic-admin run-program /var/www/<PROJECTNAME>/env/bin/gunicorn wsgi:application -c /var/www/<PROJECTNAME>/<PROJECTNAME>/gunicorn_settings.py
group=www-data
autostart=True
stdout_logfile = /var/log/webapps/<PROJECTNAME>/gunicorn.log
logfile_maxbytes = 100MB
redirect_stderr=True
And this is the gunicorn_settings config file:
pythonpath = '/var/www/<PROJECTNAME>/'
pidfile = '/tmp/<PROJECTNAME>.pid'
user = 'www-data'
group = 'www-data'
proc_name = '<PROJECTNAME>'
workers = 2
bind = 'unix:/tmp/gunicorn-<PROJECTNAME>.sock'
stdout_logfile = '/var/log/gunicorn/<PROJECTNAME>.log'
loglevel = 'debug'
debug = True
wsgi.py contains an extra pythonpath /var/www/
I have another Django 1.2 site running just fine inside a virtualenv with supervisor and New Relic on the same server.
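One thing sometimes worth trying when the application never shows up in the dashboard is to initialise the agent inside wsgi.py itself instead of relying only on newrelic-admin; a rough sketch, assuming the standard django.core.wsgi entry point (the config path mirrors NEW_RELIC_CONFIG_FILE from the supervisor block above):
# wsgi.py -- sketch only: initialise the New Relic agent explicitly and
# wrap the WSGI application by hand.
import os

import newrelic.agent
newrelic.agent.initialize('/var/www/<PROJECTNAME>/newrelic.ini')

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings.prod')

from django.core.wsgi import get_wsgi_application
application = newrelic.agent.WSGIApplicationWrapper(get_wsgi_application())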

RabbitMQ/Celery/Django Memory Leak?

I recently took over another part of the project that my company is working on and have discovered what seems to be a memory leak in our RabbitMQ/Celery setup.
Our system has 2GB of memory, with roughly 1.8GB free at any given time. We have multiple tasks that crunch large amounts of data and add them to our database.
When these tasks run, they consume a rather large amount of memory, quickly dropping our available memory to anywhere between 16MB and 300MB. The problem is that after these tasks finish, the memory does not come back.
We're using:
RabbitMQ v2.7.1
AMQP 0-9-1 / 0-9 / 0-8 (got this line from the RabbitMQ startup_log)
Celery 2.4.6
Django 1.3.1
amqplib 1.0.2
django-celery 2.4.2
kombu 2.1.0
Python 2.6.6
erlang 5.8
Our server is running Debian 6.0.4.
I am new to this setup, so if there is any other information you need that could help me determine where this problem is coming from, please let me know.
All tasks have return values, all tasks have ignore_result=True, and CELERY_IGNORE_RESULT is set to True.
Thank you very much for your time.
My current config file is:
CELERY_TASK_RESULT_EXPIRES = 30
CELERY_MAX_CACHED_RESULTS = 1
CELERY_RESULT_BACKEND = False
CELERY_IGNORE_RESULT = True
BROKER_HOST = 'localhost'
BROKER_PORT = 5672
BROKER_USER = c.celery.u
BROKER_PASSWORD = c.celery.p
BROKER_VHOST = c.celery.vhost
I am almost certain you are running this setup with DEBUG=True, which leads to a memory leak.
Check this post: Disable Django Debugging for Celery.
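For context on why DEBUG=True causes this in a worker: when DEBUG is on, Django appends every executed SQL query to django.db.connection.queries, and a long-running Celery worker never clears that list. A small illustration (run_heavy_task is just a made-up name):
# With DEBUG = True, Django records every query in connection.queries,
# which grows for the lifetime of the worker unless cleared manually.
from django.db import connection, reset_queries

def run_heavy_task(queryset):
    list(queryset)  # forces the queries to execute; each one is recorded
    print("queries held in memory:", len(connection.queries))
    reset_queries()  # clears the list; without it the memory is never released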
I'll post my configuration in case it helps.
settings.py
djcelery.setup_loader()
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_VHOST = "rabbit"
BROKER_USER = "YYYYYY"
BROKER_PASSWORD = "XXXXXXX"
CELERY_IGNORE_RESULT = True
CELERY_DISABLE_RATE_LIMITS = True
CELERY_ACKS_LATE = True
CELERYD_PREFETCH_MULTIPLIER = 1
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
CELERY_ROUTES = ('FILE_WITH_ROUTES',)
You might be hitting this issue in librabbitmq. Please check whether or not Celery is using librabbitmq>=1.0.1.
A simple fix to try is: pip install "librabbitmq>=1.0.1".
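If it is unclear whether the worker even has librabbitmq available, a quick check from the same environment is something like the following sketch (if the import fails, Celery falls back to amqplib):
# Rough check of the installed AMQP client library.
try:
    import librabbitmq
    print("librabbitmq", getattr(librabbitmq, "__version__", "(version attribute not found)"))
except ImportError:
    print("librabbitmq is not installed")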