Supervisord with django writing separate logs for each program

I'm using supervisord (through django-supervisor, a thin wrapper around supervisor) to run multiple processes for my Django installation.
My problem is that all the logs are written to the supervisord log file (in this example out.log) instead of the separate per-program log files.
The conf file (cleaned up):
[supervisord]
logfile=/var/log/server/ourserver/out.log
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
[program:webserver]
command=uwsgi uwsgi.ini
stout_logfile = /var/log/server/ourserver/django.log
redirect_stderr = true
;autostart = true
;autorestart = true
[program:celery]
command=celery worker -B -A server.celery --loglevel=info --concurrency=4
;autostart = true
;autorestart = true
stout_logfile = /var/logs/server/ourserver/celery.log
redirect_stderr = true
[program:updater]
command=python -u updater.py
;directory=/home/ubuntu/server/ourserver
;autostart = true
;autorestart = true
stout_logfile = /var/logs/server/ourserver/updater.log
redirect_stderr = true

Replace stout_logfile with stdout_logfile (the option name is misspelled) in each [program:...] section.
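A corrected sketch of the program sections, keeping the commands and paths from the question (the question also mixes /var/log and /var/logs; the paths below assume /var/log is the directory that actually exists):
[program:webserver]
command=uwsgi uwsgi.ini
stdout_logfile = /var/log/server/ourserver/django.log
redirect_stderr = true
[program:celery]
command=celery worker -B -A server.celery --loglevel=info --concurrency=4
stdout_logfile = /var/log/server/ourserver/celery.log
redirect_stderr = true
[program:updater]
command=python -u updater.py
stdout_logfile = /var/log/server/ourserver/updater.log
redirect_stderr = true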

Related

How to "log-maxsize" the uWSGI vassal logger?

I started uWSGI via supervisord in Emperor mode, in order to deploy multiple Django apps in the near future. So far I have only deployed one app for testing purposes.
I would like to separate the emperor logs from my vassals' logs. So far the loggers work, except that I cannot apply log-maxsize to my vassal's logger; the same applies to my vassal's req-logger.
uwsgi.ini
[program:uwsgi]
command=uwsgi
--master
--emperor %(ENV_HOME)s/etc/uwsgi/vassals
--logto %(ENV_HOME)s/logs/uwsgi/emperor/emperor.log
--log-maxsize 20
--enable-threads
--log-master
<...autostart etc...>
[garden_app] - vassal
<...>
; ---------
; # Logger #
; ---------
; disable default req log and set request log
req-logger = file:%(var_logs)/vassal_garden/garden_request.log
disable-logging = true
log-4xx = true
log-5xx = true
log-maxsize = 20
; set app / error Log
logger = file:%(var_logs)/vassal_garden/garden_app.log
log-maxsize = 20
<...>
As you can see, I set log-maxsize very low to see the effects immediately.
First of all: all logs are written separately.
However, while my emperor creates new files (emperor.log.122568) after reaching log-maxsize, my vassal files keep growing past log-maxsize; in other words nothing happens, they never create a garden_app.log.56513.
So my question is: how do I set log-maxsize for my vassals' loggers? Does log-maxsize apply only to logto?
I also tried logto or logto2 on my vassal, but then my emperor says "unloyal bad behaving vassal" or "Permission denied".
My solution, after looking into it for a long while: now I get separate logs and rotation.
Change the option from logger to logto. logger will do the logging job, but it does not rotate, for whatever reason. Also, do not use the file: prefix.
Make sure to run supervisorctl reread and supervisorctl update after changes to your uwsgi.ini.
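For example, from wherever you normally run supervisorctl:
supervisorctl reread
supervisorctl update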
uwsgi.ini
[program:uwsgi]
command=uwsgi
--master
--emperor %(ENV_HOME)s/etc/uwsgi/vassals
--logto %(ENV_HOME)s/logs/uwsgi/emperor/emperor.log
--log-maxsize 350000
--enable-threads
<...autostart etc...>
[garden_app] - vassal
<...>
; ---------
; # Logger #
; ---------
; default req logger and set request Log and - currently disabled
req-logger = file:%(var_logs)/vassal_garden/garden_request.log
disable-logging = true
log-4xx = true
log-5xx = true
log-maxsize = 350000
; set app / error Log - check
logto = %(var_logs)/vassal_garden/garden_app.log
log-maxsize = 350000
log-backupname = %(var_logs)/vassal_garden/garden_app.old.log
<...>

How to setup uWSGI vassal names for better log reference?

INFO:
Framework: Django 2.x (< 3.x);
Services: supervisord; uWSGI
Host: CentOS Linux 7
Hello, I am currently testing how to deploy multiple Django apps with uWSGI on my host. I set everything up based on the manuals provided by my host and uWSGI, and it works. However, I would like to customize everything a bit further, so that I can understand it a bit better.
As far as I understand, my uWSGI service uwsgi.ini currently works in emperor mode and provides vassals named baron_app.ini and prince_app.ini to handle my two different apps.
Question
I noticed that the err.log is kind of confusing for debugging with multiple apps.
For instance...
announcing my loyalty to the Emperor...
Sat May 2 21:37:58 2020 - [emperor] vassal baron_app.ini is now loyal....
[pid: 26852|app: 0|req: 2/2].....
Question: Is there a way to give my vassals a name so that it gets printed in the log? Or a way to tell uWSGI to record some kind of process and app relation (emperor - vassals - worker etc.) in the log?
For instance, I could imagine something like this; it could make it easier to find errors.
#baron_app: announcing my loyalty to the Emperor...
#emperor: Sat May 2 21:37:58 2020 - [emperor] vassal baron_app.ini is now loyal....
#prince_app: [pid: 26852|app: 0|req: 2/2].....
I tried something like procname-prefix and vassal_name, but it seems not to work - maybe because I don't know where to put it: in the uwsgi.ini or in the vassals' *.ini?
My current settings...
...< etc < services.d < uwsgi.ini
[program:uwsgi]
command=uwsgi --master --emperor %(ENV_HOME)s/uwsgi/apps-enabled
autostart=true
autorestart=true
stderr_logfile = ~/uwsgi/err.log
stdout_logfile = ~/uwsgi/out.log
stopsignal=INT
vacuum = 1
...< uwsgi < apps-enabled < baron_app.ini
[uwsgi]
base = /home/kiowa/baron_app/baron_app
chdir = /home/kiowa/baron_app/
static_files = /home/kiowa/baron_app/
http = :8080
master = true
wsgi-file = %(base)/wsgi.py
touch-reload = %(wsgi-file)
static-map = /static=%(static_files)/static_storage/production_static
enable-threads = true
single-interpreter = true
app = wsgi
virtualenv = /home/kiowa/.local/env_baron
plugin = python
uid = kiowa
gid = kiowa
...< uwsgi < apps-enabled < prince_app.ini
[uwsgi]
base = /home/kiowa/prince_app/baron_app
chdir = /home/kiowa/prince_app/
static_files = /home/kiowa/prince_app/
http = :8000
master = true
wsgi-file = %(base)/wsgi.py
touch-reload = %(wsgi-file)
static-map = /static=%(static_files)/static_storage/production_static
enable-threads = true
single-interpreter = true
app = wsgi
virtualenv = /home/kiowa/.local/prince_app
plugin = python
uid = kiowa
gid = kiowa
OK, I was able to separate my vassals' log files by putting this into my vassal .ini files:
; set app / error Log - check
logger = file:%(var_logs)/vassal_baron/baron_app.log
; disable default req log and set request log - check
req-logger = file:%(var_logs)/vassal_baron/baron_request.log
disable-logging = true
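If you also want these per-vassal files to rotate, the logto + log-maxsize + log-backupname approach from the log-maxsize answer above should combine with this; a sketch, reusing the same %(var_logs) variable:
; app / error log with rotation
logto = %(var_logs)/vassal_baron/baron_app.log
log-maxsize = 350000
log-backupname = %(var_logs)/vassal_baron/baron_app.old.log
; request log stays in its own file
req-logger = file:%(var_logs)/vassal_baron/baron_request.log
disable-logging = true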

Superset in production

I've been trying to work out how best to productionise Superset, or at least get it running as a daemon. I created a systemd service with the following:
[Unit]
Description=Superset
[Service]
Type=simple
WorkingDirectory=/home/XXXX/Documents/superset/venv
ExecStart=/home/XXXX/Documents/superset/venv/bin/superset runserver
[Install]
WantedBy=multi-user.target
And the last error I got was that gunicorn cannot be found. I don't know what else I am missing - or is there another way to set it up?
I was able to set it up, after a bunch of searching and trial and error, with supervisor, which is a Python 2 program, but it can run any command (including other Python versions in other virtual environments).
I'm running it on an Ubuntu 16 VPS. After creating an environment and installing supervisor, you create a configuration file; mine looks like this:
[supervisord]
logfile = %(ENV_HOME)s/sdaprod/supervisor/supervisord.log
logfile_maxbytes = 50MB
logfile_backups=10
loglevel = info
pidfile = %(ENV_HOME)s/sdaprod/supervisor/supervisord.pid
nodaemon = false
minfds = 1024
minprocs = 200
umask = 022
identifier = supervisor
directory = %(ENV_HOME)s/sdaprod/supervisor
nocleanup = true
childlogdir = %(ENV_HOME)s/sdaprod/supervisor
strip_ansi = false
[unix_http_server]
file=/tmp/supervisor.sock
chmod = 0777
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:superset]
command = %(ENV_HOME)s/miniconda3/envs/superset/bin/superset runserver
directory = %(ENV_HOME)s/sdaprod
environment = PATH='%(ENV_PATH)s:%(ENV_HOME)s/miniconda3/envs/superset/bin',PYTHONPATH='%(ENV_PYTHONPATH)s:%(ENV_HOME)s/sdacore:%(ENV_HOME)s/sdaprod'
And then you just run supervisord from an environment that has it installed.
The %(ENV_<>)s entries are environment-variable expansions. This is my first time doing this, so I absolutely cannot vouch for this approach's efficiency, but it does work.
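A minimal sketch of that last step, assuming the config above is saved as ~/sdaprod/supervisor/supervisord.conf (the exact file name is an assumption):
# from the environment where supervisor is installed
pip install supervisor
supervisord -c ~/sdaprod/supervisor/supervisord.conf
# check that the superset program came up
supervisorctl -c ~/sdaprod/supervisor/supervisord.conf status superset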

start: Job failed to start UWSGI

Trying to configure Ubuntu + nginx + uWSGI + Django.
Upstart script /etc/init/uwsgi.conf:
description "uWSGI application server in Emperor mode"
start on runlevel [2345]
stop on runlevel [!2345]
setuid voxa
setgid www-data
exec /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
uWSGI configuration:
[uwsgi]
project = project
base = /home/user
chdir = %(base)/%(project)
home = home/user/Env/project_env
module = %(project).wsgi:application
master = true
processes = 5
socket = %(base)/%(project)/%(project).sock
chmod-socket = 664
vacuum = true
But after running sudo service uwsgi start I get an error:
start: Job failed to start
What should I check to fix it?
UPD:
With the virtualenv activated, the app runs successfully with the uwsgi command:
uwsgi --http :8000 --module project.wsgi
uWSGI doesn't have permission to create the socket file in the specified directory. To solve that, you can run the emperor as root and drop privileges in the vassal after creating the socket. Simply add to your vassal config:
username = voxa
groupname = www-data
And remove setuid and setgid from your upstart config file.
If you're worried that someone will abuse this and use another user/group, you can use emperor tyrant mode by adding --emperor-tyrant to the uwsgi start line in the upstart config. That will disallow changing the username and groupname to anything other than the owner of the vassal config file.
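A sketch of the resulting upstart job, based on the one from the question (the tyrant-mode variant is shown commented out):
description "uWSGI application server in Emperor mode"
start on runlevel [2345]
stop on runlevel [!2345]
# setuid/setgid removed: the emperor runs as root, each vassal drops privileges itself
exec /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
# tyrant-mode variant: vassals run as the owner of their config file
# exec /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites --emperor-tyrant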

How to run a daemon of djcelery with celerycam

I'm working with django-celery. I have a celery daemon running under supervisor, but in the Django admin I can't see the state of the tasks.
I can only see the state of my tasks in the Django admin when I run python manage.py celerycam in a console.
How do I run celerycam as a daemon?
You can start your celerycam daemon together with your app and celery, all managed by supervisor.
Example config file (/etc/supervisor/conf.d/app_name.conf):
# app config
[program:app_name]
user = www-data
directory = /var/www/app_name
command = /var/www/app_name/bin/python /var/www/app_name/bin/gunicorn agora.wsgi_server:application --bind 127.0.0.1:8022 -t 90 --workers 4 --settings='app_name.settings.production'
redirect_stderr = true
autorestart=true
stdout_logfile = /var/log/supervisor/app_name.log
stderr_logfile = /var/log/supervisor/app_name_err.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=warn
autostart = true
stopsignal=KILL
environment=LANG="en_US.UTF-8",LC_ALL="en_US.UTF-8",LC_LANG="en_US.UTF-8"
stopasgroup=true
killasgroup=true
# celerycam config
[program:app_name_celerycam]
user = www-data
directory = /var/www/app_name
command = /var/www/app_name/bin/python manage.py celerycam --settings='app_name.settings.production'
redirect_stderr = true
autorestart=true
stdout_logfile = /var/log/supervisor/app_name_celerycam.log
stderr_logfile = /var/log/supervisor/app_name_celerycam_err.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=warn
autostart = true
stopwaitsecs=5
# celery config
[program:app_name_celery]
user = www-data
directory = /var/www/app_name
command = /var/www/app_name/bin/python manage.py celeryd -l INFO -E -B --settings='app_name.settings.production' --concurrency=1 --pidfile=/var/run/celery/app_name_celery.pid
redirect_stderr = true
autorestart=true
stdout_logfile = /var/log/supervisor/app_name_celery.log
stderr_logfile = /var/log/supervisor/app_name_celery_err.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=warn
autostart=true
stopwaitsecs=5
environment=C_FORCE_ROOT=1
stopasgroup=true
killasgroup=true
# group of our daemons
[group:app_name]
programs=app_name,app_name_celerycam,app_name_celery
priority=999
Reload our configuration:
supervisorctl reread
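(As in the uwsgi.ini answer above, reread only re-parses the config files; follow it with supervisorctl update so the new app_name group is actually added and started.)
supervisorctl update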
Now we can manage all daemons of our application with simple commands:
supervisorctl start app_name:*
supervisorctl stop app_name:*
supervisorctl restart app_name:*
supervisorctl status app_name:*