I have a minimal Flask API deployed on DigitalOcean which throws this error when I make a large request:
[1] [CRITICAL] WORKER TIMEOUT (pid:16)
I tried:
Editing gunicorn's config.py inside /env/Lib/site-packages directly (0 is unlimited, but I also tried 500, etc.):
class Timeout(Setting):
    name = "timeout"
    section = "Worker Processes"
    cli = ["-t", "--timeout"]
    meta = "INT"
    validator = validate_pos_int
    type = int
    default = 0
    desc = """\
gunicorn_config.py
bind = "0.0.0.0:8080"
workers = 2
timeout = 300
Procfile
web: gunicorn app:app --timeout 300
But I get the same error in the DigitalOcean console, and on the browser frontend I get a 504 error after about 30 seconds, which means I'm not overriding gunicorn's 30-second default.
Smaller/faster requests work fine.
Any idea what can be the issue? Thanks so much!
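For reference, gunicorn only reads a file like gunicorn_config.py when it is pointed at it explicitly (on its own it auto-loads only ./gunicorn.conf.py in the working directory), so one thing worth checking is whether the run command actually passes the config file. A sketch of such a Procfile line, reusing the file and app names from the snippets above:
web: gunicorn --config gunicorn_config.py app:app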
I started uWSGI via supervisord in Emperor mode to deploy multiple Django apps in the near future. So far I have only deployed one app for testing purposes.
I would like to separate the emperor logs from my vassals' logs. So far the loggers work, except that I cannot apply log-maxsize to my vassal's logger; the same applies to my vassal's req-logger.
[uwsgi]
[program:uwsgi]
command=uwsgi
--master
--emperor %(ENV_HOME)s/etc/uwsgi/vassals
--logto %(ENV_HOME)s/logs/uwsgi/emperor/emperor.log
--log-maxsize 20
--enable-threads
--log-master
<...autostart etc...>
[garden_app] - vassal
<...>
; ---------
; # Logger #
; ---------
; disable default req log and set request log
req-logger = file:%(var_logs)/vassal_garden/garden_request.log
disable-logging = true
log-4xx = true
log-5xx = true
log-maxsize = 20
; set app / error Log
logger = file:%(var_logs)/vassal_garden/garden_app.log
log-maxsize = 20
<...>
As you can see, I set the log-maxsize very low to see the effects immediately.
First of all, all logs are working separately.
However, while my emperor creates new files (emperor.log.122568) after reaching the log-maxsize, my vassal files keep growing past the log-maxsize; in other words, nothing happens and they don't create garden_app.log.56513.
So my question is: how do I set log-maxsize for my vassals' loggers? Does log-maxsize only apply to logto?
I also tried logto or logto2 on my vassal, but then my emperor says "unloyal bad behaving vassal" or "Permission denied".
My solution after looking into this for a long time. Now I get separate logs and rotation.
Change the option from logger to logto. logger will do the logging job, but it does not rotate, for whatever reason. Also, do not use the file: prefix.
Make sure to run supervisorctl reread and supervisorctl update after changes to your uwsgi.ini.
uwsgi.ini
[program:uwsgi]
command=uwsgi
--master
--emperor %(ENV_HOME)s/etc/uwsgi/vassals
--logto %(ENV_HOME)s/logs/uwsgi/emperor/emperor.log
--log-maxsize 350000
--enable-threads
<...autostart etc...>
[garden_app] - vassal
<...>
; ---------
; # Logger #
; ---------
; default req logger and set request Log and - currently disabled
req-logger = file:%(var_logs)/vassal_garden/garden_request.log
disable-logging = true
log-4xx = true
log-5xx = true
log-maxsize = 350000
; set app / error Log - check
logto = %(var_logs)/vassal_garden/garden_app.log
log-maxsize = 350000
log-backupname = %(var_logs)/vassal_garden/garden_app.old.log
<...>
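After editing, the supervisord steps mentioned above would look roughly like this on the command line (assuming the program is registered as uwsgi, as in the [program:uwsgi] section):
supervisorctl reread
supervisorctl update
supervisorctl restart uwsgi   # if the emperor is already running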
I'm currently using Superset 0.28.1
The session should expire after inactivity.
1) Tried setting the SUPERSET_WEBSERVER_TIMEOUT and SUPERSET_TIMEOUT parameters in config.py.
2) Tried the commands below while starting the server:
superset runserver -t 120
superset --timeout 60
gunicorn -w 2 --timeout 60 -b 0.0.0.0:9004 --limit-request-line 0 --limit-request-field_size 0 superset:app
Put the PERMANENT_SESSION_LIFETIME config in your config file. The value is in seconds.
e.g.
PERMANENT_SESSION_LIFETIME = 60 means the session expires after 1 minute of inactivity.
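For illustration, Flask (whose config Superset builds on) accepts this setting either as a plain number of seconds or as a timedelta, so an equivalent entry in the config file could also be written as follows (the 30-minute value is just an example):
from datetime import timedelta

PERMANENT_SESSION_LIFETIME = timedelta(minutes=30)  # expire after 30 minutes of inactivity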
I'm running into a small issue with uWSGI and my Django application on a production server. I have a FreeBSD jail which hosts only one Django application. When I make code changes, I touch the settings file so that the modifications are taken into account.
However, the touch kills my uWSGI service every time, so I have to start uWSGI again manually, otherwise I get a 502 Bad Gateway error in my browser.
Environment:
Django version : 1.11.20
uWSGI version : 2.0.15
Python version : 3.6.2
This is my uwsgi.ini file:
[uwsgi]
pythonpath=/usr/local/www/app/src/web
virtualenv = /usr/local/www/app/venv
module=main.wsgi:application
env = DJANGO_SETTINGS_MODULE=main.settings.prod
env = no_proxy=*.toto.fr
env = LANG=en_US.UTF-8
master=true
processes=2
vacuum=true
chmod-socket=660
chown-socket=www:www
socket=/tmp/uwsgi.sock
socket-timeout = 60
post-buffering = 8192
max-requests = 5000
buffer-size = 32768
offload-threads = 1
uid=www
gid=www
logdate=true
log-maxsize = 20000000
manage-script-name=true
touch-reload = /usr/local/www/app/src/web/main/settings/prod.py
Issue:
When I make a deployment, once it's done, I do:
touch /usr/local/www/app/src/web/main/settings/prod.py
Then my uWSGI service goes down.
This is the last log I have:
Thank you very much !
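Not part of the original setup, but for comparison: uWSGI can also be asked for a graceful reload without touching a watched file, by exposing a master FIFO and writing the r command to it. A sketch, with an illustrative FIFO path:
; extra line in uwsgi.ini: control FIFO for the master process
master-fifo = /tmp/uwsgi-master.fifo

echo r > /tmp/uwsgi-master.fifo   # ask the master to gracefully reload the workers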
I'm trying to set up a cron job on Elastic Beanstalk. The task is being scheduled; for testing purposes it should run every minute. However, it is not working. It is a Django app. The app is running in two environments: one is the worker and the other one is "hosting" the application.
That part is working. The task is being triggered, but the work is not actually being done (the files are not being deleted).
Here is views.py:
# Imports assumed for this snippet; the models import path is illustrative.
from datetime import timedelta

from django.contrib.auth.decorators import login_required
from django.shortcuts import redirect
from django.utils import timezone

from .models import DemoUser, Document


@login_required
def delete_expired_files(request):
    users = DemoUser.objects.all()
    for user in users:
        documents = Document.objects.filter(owner=user.id)
        if documents:
            for doc in documents:
                now = timezone.now()
                if now >= doc.date_published + timedelta(days=doc.owner.group.valid_time):
                    doc.delete()
    return redirect("user_home")
cron.yml:
version: 1
cron:
  - name: "delete_expired_files"
    url: "http://networksapp.elasticbeanstalk.com/networks_app/delete_expired_files"
    schedule: "* * * * *"
However, it prints this in the log file, in the access_log part:
"POST /myapp/management/commands/delete_expired_files HTTP/1.1" 500 124709 "-" "aws-sqsd/2.0"
This is the log file I am accessing so far:
Log file content
Why is this happening? How can I fix it?
Thank you so much.
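For context on the log line above: the Elastic Beanstalk worker daemon (aws-sqsd) delivers scheduled tasks as plain POST requests with no logged-in session, so endpoints written for it are usually plain POST views without login_required and with CSRF checks disabled. A sketch of what such a view could look like, reusing the names from the snippet above (the import path is an assumption):
from datetime import timedelta

from django.http import HttpResponse
from django.utils import timezone
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

from .models import DemoUser, Document  # import path assumed


@csrf_exempt
@require_POST
def delete_expired_files(request):
    # Delete every document older than its owner's group validity window.
    for user in DemoUser.objects.all():
        for doc in Document.objects.filter(owner=user.id):
            if timezone.now() >= doc.date_published + timedelta(days=doc.owner.group.valid_time):
                doc.delete()
    return HttpResponse(status=200)  # the worker daemon only needs a 2xx to consider the task done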
I am using Emperor mode and noticed a couple of uWSGI worker processes keep using CPU.
Here is the ini config for the particular website:
[uwsgi]
socket = /tmp/%n.sock
master = true
processes = 2
env = DJANGO_SETTINGS_MODULE=abc.settings
module = django.core.handlers.wsgi:WSGIHandler()
pythonpath = /var/www/abc/abc
chdir = /var/www/abc/abc
chmod-socket = 666
uid = www-data
virtualenv = /var/www/abc
vacuum = true
procname-prefix-spaced = %n
plugins = python
enable-threads = true
single-interpreter = true
sharedarea = 4
htop shows:
13658 www-data 20 0 204M 59168 4148 S 3.0 3.5 3h03:50 abc uWSGI worker 1
13659 www-data 20 0 209M 65092 4428 S 1.0 3.8 3h02:02 abc uWSGI worker 2
I have checked the nginx and uWSGI logs and neither shows the site being accessed.
The question is:
Why do the workers keep using around 1-5% of the CPU when the site is not being accessed?
I think I have found the cause of this. In development I am using a timer to monitor code changes and then reload the uWSGI processes. Because the project uses django-cms and is fairly big, monitoring for code changes every second is fairly heavy; after changing the timer to 5 seconds, the processes actually went quiet.
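If the timer mentioned above is uWSGI's py-autoreload (an assumption; the question does not name the exact option), the value is simply the number of seconds between scans, so the change amounts to one line in the vassal ini:
; development only: scan Python modules for changes every 5 seconds instead of every second
py-autoreload = 5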