uwsgi does not show 2xx and 3xx responses in the log - django

For testing purposes I need to see all the requests that come to my uwsgi application (and Django behind it), but I only see 4xx and 5xx responses. Here is my uwsgi.ini config:
[uwsgi]
http-socket = :8080
;chdir = /code/
module = app.wsgi:application
master = true
processes = 2
logto = ./uwsgi.log
logdate = %%d/%%m/%%Y %%H:%%M:%%S
vacuum = true
buffer-size = 65535
stats = 0.0.0.0:1717
stats-http = true
max-requests = 5000
memory-report = true
;touch-reload = /code/config/touch_for_uwsgi_reload
pidfile = /tmp/project-master.pid
enable-threads = true
single-interpreter = true
log-format = [%(ctime)] [%(proto) %(status)] %(method) %(host)%(uri) => %(rsize) bytes in %(msecs) msecs, referer - "%(referer)", user agent - "%(uagent)"
disable-logging = true ; Disable built-in logging
log-4xx = true ; but log 4xx's anyway
log-5xx = true ; and 5xx's
log-3xx = true
log-2xx = true
ignore-sigpipe = true
ignore-write-errors = true
disable-write-exception = true
;chown-socket=www-data:www-data
Django itself logs 2xx responses fine in the same environment (the uWSGI logs are written to the ./uwsgi.log file, so they are not visible here).
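A likely explanation (not confirmed in this thread): as far as I know, uWSGI only offers log-4xx and log-5xx as exceptions to disable-logging, and the log-2xx / log-3xx keys above are not recognized options, so with disable-logging = true everything except 4xx and 5xx is suppressed. A minimal sketch of the logging block that keeps the custom log-format but records every request would simply leave request logging enabled:
[uwsgi]
; ...rest of the config unchanged...
log-format = [%(ctime)] [%(proto) %(status)] %(method) %(host)%(uri) => %(rsize) bytes in %(msecs) msecs, referer - "%(referer)", user agent - "%(uagent)"
; keep request logging enabled so 2xx/3xx responses show up as well
disable-logging = false
; log-4xx / log-5xx are only needed when disable-logging is true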

Related

When setting up reload-on-rss in the uwsgi server, how can I avoid all processes reloading/respawning at the same time?

I'm encountering a situation where I use reload-on-rss/reload-on-as = 256 to work around a memory leak, and it works in almost all cases.
However, with these parameters there is a very small probability that all processes exceed the configured memory limit at the same time, so they all respawn simultaneously, and I found that the remaining requests were answered with 502.
So I want to ask: is there a way to keep at least one worker running to process requests even when its memory usage is exceeded (it should not reload until another worker has started)? I've searched for a setting that would do this but couldn't find anything. Can anyone help? Thank you! (I'm not sure whether I've made myself clear; sorry about that.)
Here is my uwsgi.ini:
http = :28888
touch-reload = true
reload-on-as = 256
reload-on-rss = 256
procname-prefix-spaced=service_name
module = service_name.wsgi:application
chdir = ./
pidfile = uwsgi.pid
socket = uwsgi.sock
master = true
vacuum = true
thunder-lock = true
enable-threads = true
harakiri = 600
processes = 20
threads = 10
py-autoreload = 1
chmod-socket = 664
post-buffering = 10240
socket-timeout = 3600
http-timeout = 3600
uwsgi_read_timeout = 3600
listen = 10000
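No answer is recorded for this one. One common mitigation (my assumption, not something from the thread) is to recycle workers proactively so they rarely reach the reload-on-rss ceiling at the same moment, and to keep development-only options such as py-autoreload out of production. A sketch of the relevant part of the config:
[uwsgi]
; recycle each worker after a fixed number of requests...
max-requests = 5000
; ...and after a fixed lifetime in seconds, so respawns spread out over time
max-worker-lifetime = 3600
; keep the memory limits as a last-resort safety net
reload-on-as = 256
reload-on-rss = 256
; give a reloading worker time to finish in-flight requests
worker-reload-mercy = 60
; py-autoreload is meant for development and forces frequent respawns
; py-autoreload = 1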

How to set up uWSGI vassal names for better log reference?

INFO:
Framework: Django 2.x (< 3.x)
Services: supervisord; uWSGI
Host: CentOS Linux 7
Hello, I am currently testing how to deploy multiple Django apps with uWSGI on my host. I set everything up based on the manuals provided by my host and uWSGI, and it works. However, I would like to customize things a bit further so that I can understand everything a bit better.
As far as I understand it, my uWSGI service uwsgi.ini currently runs in emperor mode and provides two vassals, baron_app.ini and prince_app.ini, to handle my two different apps.
Question
I noticed that the err.log is kind of confusing for debugging with multiple apps.
For instance...
announcing my loyalty to the Emperor...
Sat May 2 21:37:58 2020 - [emperor] vassal baron_app.ini is now loyal....
[pid: 26852|app: 0|req: 2/2].....
Question: Is there a way to give my vassals a name so that it is printed in the log? Or a way to tell uWSGI to record some kind of process and app relation (emperor - vassals - worker, etc.) in the log?
For instance, I could imagine something like this; it could make it easier to find errors.
#baron_app: announcing my loyalty to the Emperor...
#emperor: Sat May 2 21:37:58 2020 - [emperor] vassal baron_app.ini is now loyal....
#prince_app: [pid: 26852|app: 0|req: 2/2].....
I tried something like procname-prefix and vassal_name, but it doesn't seem to work - maybe because I don't know where to put it: in uwsgi.ini or in the vassals' *.ini files?
My current settings...
.../etc/services.d/uwsgi.ini:
[program:uwsgi]
command=uwsgi --master -- %(ENV_HOME)s/uwsgi/apps-enabled
autostart=true
autorestart=true
stderr_logfile = ~/uwsgi/err.log
stdout_logfile = ~/uwsgi/out.log
stopsignal=INT
vacuum = 1
.../uwsgi/apps-enabled/baron_app.ini:
[uwsgi]
base = /home/kiowa/baron_app/baron_app
chdir = /home/kiowa/baron_app/
static_files = /home/kiowa/baron_app/
http = :8080
master = true
wsgi-file = %(base)/wsgi.py
touch-reload = %(wsgi-file)
static-map = /static=%(static_files)/static_storage/production_static
enable-threads = true
single-interpreter = true
app = wsgi
virtualenv = /home/kiowa/.local/env_baron
plugin = python
uid = kiowa
gid = kiowa
.../uwsgi/apps-enabled/prince_app.ini:
[uwsgi]
base = /home/kiowa/prince_app/baron_app
chdir = /home/kiowa/prince_app/
static_files = /home/kiowa/prince_app/
http = :8000
master = true
wsgi-file = %(base)/wsgi.py
touch-reload = %(wsgi-file)
static-map = /static=%(static_files)/static_storage/production_static
enable-threads = true
single-interpreter = true
app = wsgi
virtualenv = /home/kiowa/.local/prince_app
plugin = python
uid = kiowa
gid = kiowa
OK, I was able to separate my vassals' log files by putting this into each vassal's .ini:
; set app / error Log - check
logger = file:%(var_logs)/vassal_baron/baron_app.log
; disable default req log and set request Log and - check
req-logger = file:%(var_logs)/vassal_baron/baron_request.log
disable-logging = true
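For the naming part of the question, here is a sketch (the prefix and paths are just illustrative) that combines the per-vassal log files above with a recognizable process name via procname-prefix-spaced:
[uwsgi]
; give every process of this vassal a recognizable name in ps/top output
procname-prefix-spaced = baron_app
; per-vassal application / error log
logger = file:%(var_logs)/vassal_baron/baron_app.log
; per-vassal request log, with the default request logging disabled
req-logger = file:%(var_logs)/vassal_baron/baron_request.log
disable-logging = true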

log not generated in Superset 0.24

I have set the following log configuration in the superset_config.py file:
LOG_FORMAT = '%(asctime)s:%(levelname)s:%(name)s:%(message)s'
LOG_LEVEL = 'DEBUG'
ENABLE_TIME_ROTATE = False
TIME_ROTATE_LOG_LEVEL = 'DEBUG'
FILENAME = os.path.join(DATA_DIR, 'log', 'superset.log')
ROLLOVER = 'midnight'
INTERVAL = 1
BACKUP_COUNT = 30
But logs are not generated in my DATA_DIR/log/superset.log file. Is there any configuration missing?
Change ENABLE_TIME_ROTATE = False to ENABLE_TIME_ROTATE = True
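As I understand it, Superset only attaches the rotating file handler that writes to FILENAME when time-based rotation is enabled. A minimal sketch using the settings already shown (DATA_DIR must already be defined or imported in superset_config.py):
# superset_config.py
import os

LOG_FORMAT = '%(asctime)s:%(levelname)s:%(name)s:%(message)s'
LOG_LEVEL = 'DEBUG'

# time-based rotation must be enabled for Superset to write to FILENAME
ENABLE_TIME_ROTATE = True
TIME_ROTATE_LOG_LEVEL = 'DEBUG'
FILENAME = os.path.join(DATA_DIR, 'log', 'superset.log')
ROLLOVER = 'midnight'
INTERVAL = 1
BACKUP_COUNT = 30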

s3cmd: obfuscate file names (change them to random values) on the Amazon S3 side while keeping the original file names locally

Here is my .s3cfg with the GPG encryption passphrase and other security settings. Would you recommend any other security hardening?
[default]
access_key = $USERNAME
access_token =
add_encoding_exts =
add_headers =
bucket_location = eu-central-1
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/local/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = $PASSPHRASE
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
secret_key = $PASSWORD
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
urlencoding_mode = normal
use_https = True
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
I use this command to upload/sync my local folder to Amazon S3.
s3cmd -e -v put --recursive --dry-run /Users/$USERNAME/Downloads/ s3://dgtrtrtgth777
INFO: Compiling list of local files...
INFO: Running stat() and reading/calculating MD5 values on 15957 files, this may take some time...
INFO: [1000/15957]
...
INFO: [15000/15957]
I tested the encryption with the Transmit GUI S3 client and didn't get plain-text files.
But I still see the original filenames. I want to change each filename to a random value while keeping the original filename locally (a mapping?). How can I do this?
What are the downsides of doing so if I need to restore the files? I use Amazon S3 only as a backup, in addition to my Time Machine backup.
If you use "random" names, then it isn't sync.
If your only record on the filenames/mapping is local, it will be impossible to restore your backup in case of a local failure.
If you don't need all versions of your files I'd suggest putting everything in a (possibly encrypted) compressed tarball before uploading it.
Otherwise, you will have to write a small script that lists all files and individually does an s3cmd put specifying a random destination, where the mapping is appended to a log file, which should be the first thing you s3cmd put to your server. I don't recommend this for something as crucial as storing your backups.
A skeleton showing how this could work:
# Write one upload command per file into backupX.sh, where X is the version number
find /Users/$USERNAME/Downloads/ -type f | awk '{print "s3cmd -e -v put \""$0"\" s3://dgtrshitcrapola/"rand()*1000000}' > backupX.sh
# Upload the mapping file
s3cmd -e -v put backupX.sh s3://dgtrshitcrapola/
# Upload the actual files
sh backupX.sh
# Add cleanup code here
However, you will need to handle filename collisions, failed uploads, versioning clashes, ... why not use an existing tool that backs up to S3?
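On the restore question, a rough sketch (my assumption, building on the quoted put lines generated above and the same example bucket) of turning the mapping file back into download commands:
# fetch the mapping script first
s3cmd get s3://dgtrshitcrapola/backupX.sh .
# turn every `put "<local>" <remote>` line into a `get <remote> "<local>"` line
sed 's|^s3cmd -e -v put "\(.*\)" \(s3://.*\)$|s3cmd get \2 "\1"|' backupX.sh > restoreX.sh
# the target directories must already exist, and the same .s3cfg
# (gpg_passphrase) is needed so the downloaded files can be decrypted
sh restoreX.sh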

Supervisord with Django: writing separate logs for each program

I'm using supervisord (through django-supervisor, a thin wrapper around supervisor) to run multiple processes for my Django installation.
My problem is that all the logs are written to the supervisord log file (in this example out.log) instead of to the separate log files.
the conf file (cleaned up):
[supervisord]
logfile=/var/log/server/ourserver/out.log
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
[program:webserver]
command=uwsgi uwsgi.ini
stout_logfile = /var/log/server/ourserver/django.log
redirect_stderr = true
;autostart = true
;autorestart = true
[program:celery]
command=celery worker -B -A server.celery --loglevel=info --concurrency=4
;autostart = true
;autorestart = true
stout_logfile = /var/logs/server/ourserver/celery.log
redirect_stderr = true
[program:updater]
command=python -u updater.py
;directory=/home/ubuntu/server/ourserver
;autostart = true
;autorestart = true
stout_logfile = /var/logs/server/ourserver/updater.log
redirect_stderr = true
Replace stout_logfile with stdout_logfile in each [program:...] section.
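Applied to the configuration above, the webserver block would become the following (only the misspelled option name changes; the same fix applies to the celery and updater blocks, which also point at /var/logs/... rather than /var/log/..., so check that those directories exist):
[program:webserver]
command=uwsgi uwsgi.ini
; stdout_logfile (not stout_logfile) so uWSGI output lands in django.log
stdout_logfile = /var/log/server/ourserver/django.log
redirect_stderr = true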