I'm currently using Superset 0.28.1, and the session should expire after inactivity.
1) Tried setting the SUPERSET_WEBSERVER_TIMEOUT and SUPERSET_TIMEOUT parameters in config.py.
2) Tried the commands below when starting the server:
superset runserver -t 120
superset --timeout 60
gunicorn -w 2 --timeout 60 -b 0.0.0.0:9004 --limit-request-line 0 --limit-request-field_size 0 superset:app
Put the PERMANENT_SESSION_LIFETIME setting in your config file. The value is in seconds.
e.g.
PERMANENT_SESSION_LIFETIME = 60 means the session expires after 1 minute of inactivity.
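For illustration, a minimal sketch of what this could look like in superset_config.py (assuming the config file is on the PYTHONPATH; Flask accepts either an int number of seconds or a timedelta here):

# superset_config.py
from datetime import timedelta

# Expire idle sessions after 10 minutes of inactivity.
PERMANENT_SESSION_LIFETIME = timedelta(minutes=10)
# Equivalent, as a plain number of seconds:
# PERMANENT_SESSION_LIFETIME = 600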
I have a minimal Flask API deployed on DigitalOcean which throws this error when I make a large request:
[1] [CRITICAL] WORKER TIMEOUT (pid:16)
I tried:
Editing /env/Lib/site-packages/gunicorn-config.py (0 is unlimited, but I also tried 500, etc.):
class Timeout(Setting):
    name = "timeout"
    section = "Worker Processes"
    cli = ["-t", "--timeout"]
    meta = "INT"
    validator = validate_pos_int
    type = int
    default = 0
    desc = """\
gunicorn_config.py
bind = "0.0.0.0:8080"
workers = 2
timeout = 300
Procfile
web: gunicorn app:app --timeout 300
But I get the same error on the DigitalOcean console, and on the browser frontend I get a 504 error after about 30 seconds, which means I'm not overriding gunicorn's 30-second default.
Smaller/faster requests work fine.
Any idea what the issue could be? Thanks so much!
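One detail worth checking (an assumption about this setup): gunicorn auto-loads its settings only from a file named gunicorn.conf.py in the working directory, so a file named gunicorn_config.py has to be passed explicitly, for example in the Procfile:

web: gunicorn app:app -c gunicorn_config.py --timeout 300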
Elastic Beanstalk is adding and removing instances one after the other. Googling around points to checking the "State transition message", which comes up as "Client.UserInitiatedShutdown: User initiated shutdown". https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshooting-launch.html#troubleshooting-launch-internal lists some possible reasons, but none of them apply here; no one has touched any settings, etc. Any ideas?
UPDATE: Did a bit more digging and found out that the app deployment is failing. The relevant log errors are below:
eb-engine.log
2021/08/05 15:46:29.272215 [INFO] Executing instruction: PreBuildEbExtension
2021/08/05 15:46:29.272220 [INFO] Starting executing the config set Infra-EmbeddedPreBuild.
2021/08/05 15:46:29.272235 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f -r AWSEBAutoScalingGroup --region us-east-1 --configsets Infra-EmbeddedPreBuild
2021/08/05 15:50:44.538818 [ERROR] An error occurred during execution of command [app-deploy] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
2021/08/05 15:50:44.540438 [INFO] Executing cleanup logic
2021/08/05 15:50:44.581445 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1628178644,"severity":"ERROR"}]}]}
2021/08/05 15:50:44.620394 [INFO] Platform Engine finished execution on command: app-deploy
2021/08/05 15:51:22.196186 [ERROR] An error occurred during execution of command [self-startup] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
2021/08/05 15:51:22.196215 [INFO] Executing cleanup logic
eb-cfn-init.log
[2021-08-05T15:42:44.199Z] Completed executing cfn_init.
[2021-08-05T15:42:44.226Z] finished _OnInstanceReboot
+ RESULT=1
+ [[ 1 -ne 0 ]]
+ sleep_delay
+ (( 2 < 3600 ))
+ echo Sleeping 2
Sleeping 2
+ sleep 2
+ SLEEP_TIME=4
+ true
+ curl https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922/lib/UserDataScript.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4627 100 4627 0 0 24098 0 --:--:-- --:--:-- --:--:-- 24098
+ RESULT=0
+ [[ 0 -ne 0 ]]
+ SLEEP_TIME=2
+ /bin/bash /tmp/ebbootstrap.sh 'https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A345470085661%3Astack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f/AWSEBInstanceLaunchWaitHandle?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200528T171102Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86399&X-Amz-Credential=AKIAIIT3CWAIMJYUTISA%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=57c7da0aec730af1b425d1aff68517c333cf9d5432c984d775419b415cac8513' arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f 65c52bb7-0376-4d43-b304-b64890a34c1c https://elasticbeanstalk-health.us-east-1.amazonaws.com '' https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922 us-east-1
[2021-08-05T15:46:07.683Z] Started EB Bootstrapping Script.
[2021-08-05T15:46:07.739Z] Received parameters:
TARBALLS =
EB_GEMS =
SIGNAL_URL = https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A345470085661%3Astack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f/AWSEBInstanceLaunchWaitHandle?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200528T171102Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86399&X-Amz-Credential=AKIAIIT3CWAIMJYUTISA%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=57c7da0aec730af1b425d1aff68517c333cf9d5432c984d775419b415cac8513
STACK_ID = arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f
REGION = us-east-1
GUID =
HEALTHD_GROUP_ID = 65c52bb7-0376-4d43-b304-b64890a34c1c
HEALTHD_ENDPOINT = https://elasticbeanstalk-health.us-east-1.amazonaws.com
PROXY_SERVER =
HEALTHD_PROXY_LOG_LOCATION =
PLATFORM_ASSETS_URL = https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922
Is this some corrupted AMI?
It turned out that there was a config script in the .ebextensions directory that was misbehaving.
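For reference, a minimal, well-formed .ebextensions config file looks like this (a hypothetical example; a syntax error or a failing command in such a file is enough to produce the PreBuildEbExtension failure above):

# .ebextensions/01_example.config  (hypothetical file name)
commands:
  01_say_hello:
    command: "echo hello"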
I want to set up a cron job that fetches some stories from an API every 5 minutes and sends a mail if any new stories come up (I used a Django management command to do this). When I run the management command by hand it sends me the correct info, but the cron job I set up for it never fires. Here is my crontab file:
vi /etc/crontab
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
5 * * * * root /home/anurag/projects/virtualenvs/django6/bin/python /home/anurag/Documents/HNStories/hnstories/manage.py get_stories >> /home/anurag/cron_log.txt
Here are its permissions:
ls -l /etc/crontab
-rw-r--r-- 1 root root 884 Aug 17 20:20 /etc/crontab
Also, I am not able to see any warnings or errors in the syslog:
cat /var/log/syslog | grep crontab
Aug 17 12:58:01 anurag cron[1257]: (*system*) RELOAD (/etc/crontab)
Aug 17 17:24:01 anurag cron[8534]: (*system*) RELOAD (/etc/crontab)
Aug 17 20:21:01 anurag cron[1139]: (*system*) RELOAD (/etc/crontab)
I also tried restarting cron and rebooting my computer, but I was not able to fix the issue.
The correct syntax for executing every 5 minutes would be:
*/5 * * * * root /home/anurag/projects/virtualenvs/django6/bin/python /home/anurag/Documents/HNStories/hnstories/manage.py get_stories >> /home/anurag/cron_log.txt
(5 * * * * runs once per hour, at minute 5; */5 matches every minute divisible by 5.)
Another reason why the command isn't executing might be a missing newline at the end of /etc/crontab.
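It can also help to capture stderr in the log file, so that any traceback from the management command becomes visible (the same entry as above, with 2>&1 appended):

*/5 * * * * root /home/anurag/projects/virtualenvs/django6/bin/python /home/anurag/Documents/HNStories/hnstories/manage.py get_stories >> /home/anurag/cron_log.txt 2>&1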
EDIT:
You might also want to look into django-extensions, which provides a command extension (runjobs) to run scheduled jobs.
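A minimal sketch of such a job, assuming a hypothetical module path and reusing the existing get_stories command (the class must be named Job for runjobs to discover it):

# hnstories/jobs/hourly/get_stories_job.py  (hypothetical path)
from django.core.management import call_command
from django_extensions.management.jobs import HourlyJob

class Job(HourlyJob):
    help = "Fetch new stories and mail them if anything is new"

    def execute(self):
        # Reuse the logic of the existing management command.
        call_command("get_stories")

It would then be run with python manage.py runjobs hourly, typically itself triggered from cron.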
I use Django 1.5.3 with gunicorn 18.0 and lighttpd. I serve my static and dynamic content like this using lighttpd:
$HTTP["host"] == "www.mydomain.com" {
$HTTP["url"] !~ "^/media/|^/static/|^/apple-touch-icon(.*)$|^/favicon(.*)$|^/robots\.txt$" {
proxy.balance = "hash"
proxy.server = ( "" => ("myserver" =>
( "host" => "127.0.0.1", "port" => 8013 )
))
}
$HTTP["url"] =~ "^/media|^/static|^/apple-touch-icon(.*)$|^/favicon(.*)$|^/robots\.txt$" {
alias.url = (
"/media/admin/" => "/var/www/virtualenvs/mydomain/lib/python2.7/site-packages/django/contrib/admin/static/admin/",
"/media" => "/var/www/mydomain/mydomain/media",
"/static" => "/var/www/mydomain/mydomain/static"
)
}
url.rewrite-once = (
"^/apple-touch-icon(.*)$" => "/media/img/apple-touch-icon$1",
"^/favicon(.*)$" => "/media/img/favicon$1",
"^/robots\.txt$" => "/media/robots.txt"
)
}
I have already tried running gunicorn (via supervisord) in many different ways, but I can't get it optimized beyond roughly 1100 concurrent connections. In my project I need about 10000-15000 connections:
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 9 -k gevent --preload --settings=myproject.settings
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 10 -k eventlet --worker_connections=1000 --settings=myproject.settings --max-requests=10000
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 20 -k gevent --settings=myproject.settings --max-requests=1000
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 40 --settings=myproject.settings
About 10 other projects live on the same server, but CPU and RAM are fine, so this shouldn't be a problem, right?
I ran a load test and these are the results:
At about 1100 connections my lighttpd error log says something like the following; this is where the load test shows the drop in connections:
2013-10-31 14:06:51: (mod_proxy.c.853) write failed: Connection timed out 110
2013-10-31 14:06:51: (mod_proxy.c.939) proxy-server disabled: 127.0.0.1 8013 83
2013-10-31 14:06:51: (mod_proxy.c.1316) no proxy-handler found for: /
... after about one minute
2013-10-31 14:07:02: (mod_proxy.c.1361) proxy - re-enabled: 127.0.0.1 8013
These entries also appear every now and then:
2013-10-31 14:06:55: (network_linux_sendfile.c.94) writev failed: Connection timed out 600
2013-10-31 14:06:55: (mod_proxy.c.853) write failed: Connection timed out 110
...
2013-10-31 14:06:57: (mod_proxy.c.828) establishing connection failed: Connection timed out
2013-10-31 14:06:57: (mod_proxy.c.939) proxy-server disabled: 127.0.0.1 8013 45
So how can I tune gunicorn/lighttpd to serve more connections faster? What can I optimize? Do you know of any other/better setup?
Thanks a lot in advance for your help!
Update: Some more server info
root@django ~ # top
top - 15:28:38 up 100 days, 9:56, 1 user, load average: 0.11, 0.37, 0.76
Tasks: 352 total, 1 running, 351 sleeping, 0 stopped, 0 zombie
Cpu(s): 33.0%us, 1.6%sy, 0.0%ni, 64.2%id, 0.4%wa, 0.0%hi, 0.7%si, 0.0%st
Mem: 32926156k total, 17815984k used, 15110172k free, 342096k buffers
Swap: 23067560k total, 0k used, 23067560k free, 4868036k cached
root@django ~ # iostat
Linux 2.6.32-5-amd64 (django.myserver.com) 10/31/2013 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
33.00 0.00 2.36 0.40 0.00 64.24
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 137.76 980.27 2109.21 119567783 257268738
sdb 24.23 983.53 2112.25 119965731 257639874
sdc 24.25 985.79 2110.14 120241256 257382998
md0 0.00 0.00 0.00 400 0
md1 0.00 0.00 0.00 284 6
md2 1051.93 38.93 4203.96 4748629 512773952
root@django ~ # netstat -an | grep :80 | wc -l
7129
Kernel Settings:
echo "10152 65535" > /proc/sys/net/ipv4/ip_local_port_range
sysctl -w fs.file-max=128000
sysctl -w net.ipv4.tcp_keepalive_time=300
sysctl -w net.core.somaxconn=250000
sysctl -w net.ipv4.tcp_max_syn_backlog=2500
sysctl -w net.core.netdev_max_backlog=2500
ulimit -n 10240
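For rough sizing (a sketch based on gunicorn's documented defaults, not a measured result): with -k gevent, each worker multiplexes up to worker_connections concurrent connections (default 1000), so theoretical capacity is roughly workers × worker_connections. A configuration aimed at 10000-15000 connections could look like:

command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 16 -k gevent --worker-connections=1000 --backlog=4096 --settings=myproject.settings

Raising --backlog (default 2048) matters too, since connection bursts beyond the listen backlog are refused before any worker sees them.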
I am using Django to handle fairly long HTTP POST requests, and I am wondering if my setup has limitations when I receive many requests at the same time.
lighttpd.conf fcgi:
fastcgi.server = (
    "a.fcgi" => (
        "main" => (
            # Use host / port instead of socket for TCP fastcgi
            "host" => "127.0.0.1",
            "port" => 3033,
            "check-local" => "disable",
            "allow-x-send-file" => "enable"
        ))
)
Django init.d script start section:
start-stop-daemon --start --quiet \
--pidfile /var/www/tmp/a.pid \
--chuid www-data --exec /usr/bin/env -- python \
/var/www/a/manage.py runfcgi \
host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
Starting Django using the script above results in a multi-threaded Django server:
www-data 342 7873 0 04:58 ? 00:01:04 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 343 7873 0 04:58 ? 00:01:15 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 378 7873 0 Feb14 ? 00:04:45 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 382 7873 0 Feb12 ? 00:14:53 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 386 7873 0 Feb12 ? 00:12:49 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 7873 1 0 Feb12 ? 00:00:24 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
In the lighttpd error.log I do see load = 10, which shows I am getting many requests at the same time; this happens a few times a day:
2010-02-16 05:17:17: (mod_fastcgi.c.2979) got proc: pid: 0 socket: tcp:127.0.0.1:3033 load: 10
Is my setup correct for handling many long HTTP POST requests (each can last a few minutes) at the same time?
I think you may want to configure your FastCGI worker to run multi-process or multi-threaded.
From manage.py runfcgi help:
method=IMPL prefork or threaded (default prefork)
[...]
maxspare=NUMBER max number of spare processes / threads
minspare=NUMBER min number of spare processes / threads.
maxchildren=NUMBER hard limit number of processes / threads
So your start command would be:
start-stop-daemon --start --quiet \
--pidfile /var/www/tmp/a.pid \
--chuid www-data --exec /usr/bin/env -- python \
/var/www/a/manage.py runfcgi \
host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid \
method=prefork maxspare=4 minspare=4 maxchildren=8
You will want to adjust the number of processes as needed. Note that memory usage increases linearly with the number of FCGI processes. Also, if your processes are CPU-bound, having more processes than available CPU cores won't help much with concurrency.
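As a rough illustration of that sizing logic (a heuristic sketch, not part of the original answer; the io_bound flag is an assumption about the workload):

# fcgi_sizing.py - rough process-count heuristic for the runfcgi flags above
import multiprocessing

cores = multiprocessing.cpu_count()

# CPU-bound work gains little beyond the core count; I/O-bound work
# (like long POSTs that mostly wait) can overlap more processes.
io_bound = True
maxchildren = cores * 4 if io_bound else cores

print(f"method=prefork minspare={cores} maxspare={cores} maxchildren={maxchildren}")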