I am hoping to use supervisor to monitor and run a gunicorn server.
When I run:
/usr/bin/gunicorn app.wsgi:application -c config.conf
it works.
But the exact same command in my supervisor conf file does not work. Any explanation?
supervisor.conf
[supervisord]
[group:app]
programs=gunicorn_app
[program:gunicorn_app]
environment=PYTHONPATH=usr/bin
command=/usr/bin/gunicorn app.wsgi:application -c gunicorn.conf.py
directory=~/path/to/app
autostart=true
autorestart=true
environment=LANG="en_US.UTF-8",LC_ALL="en_US.UTF-8",LC_LANG="en_US.UTF-8"
I'm receiving an error like this:
2016-05-31 22:53:34,786 INFO spawned: 'gunicorn_app' with pid 18763
2016-05-31 22:53:34,789 INFO exited: gunicorn_app (exit status 127; not expected)
2016-05-31 22:53:35,791 INFO spawned: 'gunicorn_app' with pid 18764
2016-05-31 22:53:35,795 INFO exited: gunicorn_app (exit status 127; not expected)
2016-05-31 22:53:37,798 INFO spawned: 'gunicorn_app' with pid 18765
2016-05-31 22:53:37,802 INFO exited: gunicorn_app (exit status 127; not expected)
2016-05-31 22:53:40,807 INFO spawned: 'gunicorn_app' with pid 18766
2016-05-31 22:53:40,810 INFO exited: gunicorn_app (exit status 127; not expected)
I understand that exit code 127 means "command not found" but I can execute the exact same command on the command line.
Try using an absolute path:
/home/path/to/app
not ~/path/to/app
Supervisor does not run programs through a shell, so ~ is not expanded.
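A sketch of the relevant lines in the program block with absolute paths filled in (/home/user/path/to/app is only a placeholder for wherever the app actually lives):
[program:gunicorn_app]
command=/usr/bin/gunicorn app.wsgi:application -c /home/user/path/to/app/gunicorn.conf.py
directory=/home/user/path/to/app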
As you rightly said, this exit code means "command not found", which could be the result of either:
Supervisor not being able to locate the configuration file
Wrong configuration
Whatever the case, I recommend the following:
Case 1:
Make sure you provide the absolute (full) path to your gunicorn.conf.py file (e.g. /home/user/path/to/gunicorn.conf.py).
Case 2:
Revisit your supervisor configuration file and try to determine where the error arises. The best way to do so is to locate the log file and open it to verify the cause. To make this easier, I recommend adding the following to your supervisord.conf file:
[program:gunicorn]
# the gunicorn configuration file is located at /home/<user>/path/to/configuration_file
command=/usr/local/bin/gunicorn app.wsgi:application -c /home/<user>/path/to/gunicorn.conf.py
directory=/home/<user>/path/to/app
autostart=true
autorestart=true
# add these settings to log errors and output
stderr_logfile=/var/log/gunicorn.err.log
stdout_logfile=/var/log/gunicorn.out.log
environment=LANG="en_US.UTF-8",LC_ALL="en_US.UTF-8",LC_LANG="en_US.UTF-8"
NB: The assumption I am making here is that you want to deploy or run a Django application using gunicorn. Any error encountered when you launch your server can be verified in the gunicorn.err.log file.
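Once the config is edited, supervisor has to re-read it before the change takes effect. A typical sequence (assuming supervisorctl can reach the running supervisord, and using the program name from the block above) would be:
supervisorctl reread                # pick up the edited configuration
supervisorctl update                # apply it to the affected program(s)
supervisorctl tail gunicorn stderr  # inspect the captured error output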
Related
I am getting the below error when I try to run the Docker container for postfix.
2020-05-29 08:49:05,837 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2020-05-29 08:49:05,837 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2020-05-29 08:49:05,844 INFO RPC interface 'supervisor' initialized
2020-05-29 08:49:05,844 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-05-29 08:49:05,844 INFO supervisord started with pid 17
2020-05-29 08:49:06,852 INFO spawned: 'postfix' with pid 19
2020-05-29 08:49:06,856 INFO spawnerr: can't find command 'rsyslogd'
2020-05-29 08:49:07,167 INFO exited: postfix (exit status 1; not expected)
2020-05-29 08:49:08,172 INFO spawned: 'postfix' with pid 136
2020-05-29 08:49:08,174 INFO spawnerr: can't find command 'rsyslogd'
2020-05-29 08:49:08,219 INFO exited: postfix (exit status 1; not expected)
2020-05-29 08:49:10,230 INFO spawned: 'postfix' with pid 151
2020-05-29 08:49:10,233 INFO spawnerr: can't find command 'rsyslogd'
2020-05-29 08:49:10,274 INFO exited: postfix (exit status 1; not expected)
2020-05-29 08:49:13,283 INFO spawned: 'postfix' with pid 166
2020-05-29 08:49:13,286 INFO spawnerr: can't find command 'rsyslogd'
2020-05-29 08:49:13,286 INFO gave up: rsyslog entered FATAL state, too many start retries too quickly
2020-05-29 08:49:13,325 INFO exited: postfix (exit status 1; not expected)
2020-05-29 08:49:14,330 INFO gave up: postfix entered FATAL state, too many start retries too quickly
The supervisor program block corresponding to the above is:
command=/usr/sbin/rsyslogd -n -c3
Kindly help
Thanks,
Suv
Please provide the container config you are using to run it.
From what I understand, rsyslog is not installed in the container, so please install it before using it. If you are on a Debian or Ubuntu container, use the commands below to install it:
apt-get update && apt-get install -y software-properties-common   # provides add-apt-repository
add-apt-repository ppa:adiscon/v8-stable
apt-get update
apt-get install -y rsyslog
In a Dockerfile:
RUN apt-get update && apt-get install -y software-properties-common && \
    add-apt-repository ppa:adiscon/v8-stable && \
    apt-get update && \
    apt-get -y install rsyslog
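It can also help to confirm that the binary really exists at the path supervisor is told to run, since spawnerr: can't find command refers to the command path itself. A quick check inside the container (the -N1 validation step is optional) might be:
which rsyslogd          # should print the installed location, e.g. /usr/sbin/rsyslogd
/usr/sbin/rsyslogd -N1  # validate the rsyslog configuration without starting the daemon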
I'm running a Django application inside a container using supervisord.
But sometimes I need to view the log to fix some errors, and I couldn't find a way to do it.
I tried adding stdout_logfile and stderr_logfile, but the err logfile is always empty.
This is my supervisor.conf:
[supervisord]
loglevel=info
logfile=/tmp/supervisord.log
[program:myapp]
command = python3 -u /usr/src/app/manage.py runserver 0.0.0.0:8000
stdout_logfile=/usr/src/app/out.log
stderr_logfile=/usr/src/app/err.log
Always the same result: the out.log file contains the lines from before the exception happens, and the err.log is never created.
This is the output I get when I run docker-compose:
2020-05-13 17:33:44,140 INFO supervisord started with pid 1
2020-05-13 17:33:45,144 INFO spawned: 'myapp' with pid 9
2020-05-13 17:33:46,201 INFO success: myapp entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
After a lot of struggling I found that the log output was being buffered, so the solution is to add environment = PYTHONUNBUFFERED=1 to the supervisor.conf file.
My conf file after the modification:
[supervisord]
loglevel=info
logfile=/tmp/supervisord.log
[program:myapp]
command = python3 -u /usr/src/app/manage.py runserver 0.0.0.0:8000
environment = PYTHONUNBUFFERED=1
stdout_logfile=/usr/src/app/out.log
stderr_logfile=/usr/src/app/err.log
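With the logs now flushed as they are written, the quickest way to follow them is through supervisorctl itself. A minimal example, assuming the program name myapp from the config above:
supervisorctl tail -f myapp stdout   # follow the runserver output live
supervisorctl tail myapp stderr      # dump the tracebacks captured on stderr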
I've been trying to follow this thorough explanation of how to deploy a Django app with a celery worker to AWS Elastic Beanstalk:
How to run a celery worker with Django app scalable by AWS Elastic Beanstalk?
I had some problems installing pycurl, but solved them with the comment in:
Pip Requirements.txt --global-option causing installation errors with other packages. "option not recognized"
Then I got:
[2019-01-26T06:43:04.865Z] INFO [12249] - [Application update app-190126_134200#28/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_raiseflags/Command 05_celery_tasks_run] : Activity execution failed, because: /usr/bin/env: bash
: No such file or directory
(ElasticBeanstalk::ExternalInvocationError)
But I also solved that one: it turns out I had to convert the "celery_configuration.txt" file to UNIX line endings (I'm using Windows, and Notepad++ had automatically saved it with Windows line endings).
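For reference, a one-line way to do the same conversion in a build step or on the instance, assuming the dos2unix utility is available:
dos2unix celery_configuration.txt   # rewrites CRLF line endings as LF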
With all these modifications I can successfully deploy the project. But the problem is that the periodic tasks are not running.
I get:
2019-01-26 09:12:57,337 INFO exited: celeryd-beat (exit status 1; not expected)
2019-01-26 09:12:58,583 INFO spawned: 'celeryd-worker' with pid 25691
2019-01-26 09:12:59,453 INFO spawned: 'celeryd-beat' with pid 25695
2019-01-26 09:12:59,666 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:00,790 INFO spawned: 'celeryd-worker' with pid 25705
2019-01-26 09:13:00,791 INFO exited: celeryd-beat (exit status 1; not expected)
2019-01-26 09:13:01,915 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:03,919 INFO spawned: 'celeryd-worker' with pid 25728
2019-01-26 09:13:03,920 INFO spawned: 'celeryd-beat' with pid 25729
2019-01-26 09:13:05,985 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:06,091 INFO exited: celeryd-beat (exit status 1; not expected)
2019-01-26 09:13:07,092 INFO gave up: celeryd-beat entered FATAL state, too many start retries too quickly
2019-01-26 09:13:09,096 INFO spawned: 'celeryd-worker' with pid 25737
2019-01-26 09:13:10,084 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:11,085 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
I also have this part of the logs:
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AppDeployStage1/AppDeployPostHook/run_supervised_celeryd.sh] : Completed activity. Result:
[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A raiseflags --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A raiseflags --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="puigdemontAWS",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="holahola",RDS_HOSTNAME="aa1m59206y4fljn.cdreg3t50bbl.eu-west-1.rds.amazonaws.com"
No config updates to processes
celeryd-beat: ERROR (not running)
celeryd-beat: ERROR (abnormal termination)
celeryd-worker: ERROR (not running)
celeryd-worker: ERROR (abnormal termination)
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AppDeployStage1/AppDeployPostHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/post.
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AppDeployStage1] : Completed activity. Result:
Application version switch - Command CMD-AppDeploy stage 1 completed
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter] : Starting activity...
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation] : Starting activity...
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation/10-config.sh] : Starting activity...
[2019-01-26T09:13:00.756Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation/10-config.sh] : Completed activity. Result:
Disabled forced hourly log rotation.
[2019-01-26T09:13:00.756Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logpublish/hooks/config.
I don't know if it has something to do with the error, but notice the line above containing PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s". Shouldn't ENV_PATH be something else?
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
It's my first time deploying an app with celery, and I'm really lost, to be honest. I fought a lot to solve the first two errors (I'm a real amateur), and now that I get this one I don't even know where to start.
Also, I'm not sure if I'm using "celery_configuration.txt" the right way. The only thing I edited was the two places where it says "django_app", which I changed to "raiseflags" (the name of my Django project). Is this correct?
Does anyone know how to solve it? I can paste my files if needed, but they are just like the ones provided in the first link. I'm using Windows.
Thank you very much!
OK, the problem had nothing to do with the PATH line I was referring to. I just had to add 'django_celery_beat' and 'django_celery_results' to INSTALLED_APPS in my settings.py.
The connection error I later mentioned to Fran was because I needed to set BROKER_URL instead of CELERY_BROKER_URL, also in the settings.py file. I guess this had to do with me not specifying 'CELERY' as the namespace in app.autodiscover_tasks() in the celery.py file (they do it in the linked question, but I didn't because I was using a different version of celery).
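For anyone following along, a minimal sketch of what that looks like in settings.py (the broker URL is only a placeholder; use whatever broker your app actually talks to):
# settings.py
INSTALLED_APPS = [
    # ... existing apps ...
    'django_celery_beat',
    'django_celery_results',
]

# older celery setups without the 'CELERY' namespace read BROKER_URL rather than CELERY_BROKER_URL
BROKER_URL = 'redis://localhost:6379/0'  # placeholder value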
Thanks to Fran for everything, especially for pointing out that I should review the celery error logs. I didn't know how to do it. If any other amateur is also struggling, know that you have to "eb ssh" into your instance and then run "tail -n 40 /var/log/celery-worker.log" and "tail -n 40 /var/log/celery-beat.log" (where 40 is the number of lines you want to read). I know this sounds obvious to a lot of people, but, stupid me, I had no clue.
(By the way, I'm still struggling with a problem where the celery worker can't find the pycurl module, but that has nothing to do with this question.)
Referring to the line you pointed out, where the following appears:
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
Did you copy this line from somewhere? Because I don't see it in the link you posted.
In the linked answer it was environment=$celeryenv, where $celeryenv was defined as:
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
Log details from the supervisor.log file are below. The error below appears when I restart supervisor on CentOS 7:
2018-02-01 17:48:02,392 INFO spawnerr: can't find command '/var/www/laravel/laravel-echo-server'
2018-02-01 17:48:03,393 INFO success: laravel-queue-listener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-02-01 17:48:03,394 INFO spawnerr: can't find command '/var/www/laravel/laravel-echo-server'
2018-02-01 17:48:05,396 INFO spawnerr: can't find command '/var/www/laravel/laravel-echo-server'
2018-02-01 17:48:08,401 INFO spawnerr: can't find command '/var/www/laravel/laravel-echo-server'
2018-02-01 17:48:08,401 INFO gave up: laravel-worker entered FATAL state, too many start retries too quickly
More about the issue
I accessed the server using PuTTY and ran the command laravel-echo-server start manually, and everything works. But why doesn't it work if I run the same command from the supervisor file with the code below and restart supervisor? Here is a screenshot of running laravel-echo-server manually via PuTTY. This is of no use, though, because as soon as the PuTTY session is closed, laravel-echo-server stops as well.
The command details for laravel-echo-server in the supervisor file are below:
[program:laravel-worker]
command=/var/www/laravel/laravel-echo-server start
autostart=true
user=root
autorestart=true
stdout_logfile=/var/www/laravel/storage/logs/echoserver.log
You can check below that laravel-echo-server is already installed on the server.
Update - 1
Using the command which laravel-echo-server, I found that the path of laravel-echo-server is /usr/bin/laravel-echo-server.
When I entered the above-mentioned directory and tried to run the command laravel-echo-server start manually, it gave an error message that the laravel-echo-server.json file was missing. I added it manually and updated the URLs (authHost and allowOrigin). Finally, I stopped the command I had run manually and added the correct path in the supervisor file. Now it looks like this:
[program:echo-server]
command=/usr/bin/laravel-echo-server start
autostart=true
user=root
autorestart=true
stdout_logfile=/var/www/laravel/storage/logs/echoserver.log
Then I restarted supervisor and got the supervisor logs below:
2018-02-09 07:19:31,674 INFO success: echo-server entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-02-09 07:19:31,715 INFO exited: echo-server (exit status 0; expected)
2018-02-09 07:19:32,718 INFO spawned: 'echo-server' with pid 2286
2018-02-09 07:19:33,648 INFO exited: echo-server (exit status 0; not expected)
2018-02-09 07:19:34,652 INFO spawned: 'echo-server' with pid 2296
2018-02-09 07:19:35,545 INFO exited: echo-server (exit status 0; not expected)
2018-02-09 07:19:37,550 INFO spawned: 'echo-server' with pid 2306
2018-02-09 07:19:38,446 INFO exited: echo-server (exit status 0; not expected)
2018-02-09 07:19:41,451 INFO spawned: 'echo-server' with pid 2317
2018-02-09 07:19:42,299 INFO exited: echo-server (exit status 0; not expected)
2018-02-09 07:19:43,301 INFO gave up: echo-server entered FATAL state, too many start retries too quickly
I am still facing the same 404 error for socket.io/socket.io.js.
You have two issues: one is the path of the echo server executable, and the other is the working directory. You need to use the config below.
[program:echo-server]
command=/usr/bin/laravel-echo-server start
directory=/var/www/laravel
autostart=true
user=root
autorestart=true
stdout_logfile=/var/www/laravel/storage/logs/echoserver.log
This should now fix both issues.
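A quick way to confirm the working-directory part, using the paths from this question (laravel-echo-server reads laravel-echo-server.json from the directory it is started in), would be roughly:
ls /var/www/laravel/laravel-echo-server.json   # the config file the echo server expects to find
supervisorctl reread && supervisorctl update   # reload the edited program definition
supervisorctl status echo-server               # should settle in RUNNING instead of FATAL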
I am using Supervisor to manage celery. The supervisor config file for celery contains the following 2 entries:
stdout_logfile = /var/log/supervisor/celery.log
stderr_logfile = /var/log/supervisor/celery_err.log
What is confusing me is that although Celery is working properly and all tasks are being successfully completed, they are all written to celery_err.log. I thought that would only be for errors. The celery.log file only shows the normal celery startup info. Is the behaviour of writing successful task completion to the error log correct?
Note - tasks are definitely completing successfully (emails being sent, db entries made etc).
I ran into the same phenomenon. This is because of celery's logging mechanism; see the setup_task_loggers method in celery's logger source:
If logfile is not specified, then sys.stderr is used.
So, clear enough? Celery uses sys.stderr when logfile is not specified.
Solution:
You can use supervisord's redirect_stderr = true flag to combine the two log files into one. I'm using this one (see the sketch below).
Or configure celery's logfile option.
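A minimal sketch of the redirect_stderr approach, assuming a typical [program:celery] block (the names and paths here are placeholders, not taken from the question):
[program:celery]
command=/path/to/venv/bin/celery worker -A myproject --loglevel=INFO
; merge the process's stderr into the stdout log so everything lands in one file
redirect_stderr=true
stdout_logfile=/var/log/supervisor/celery.log
; option 2 above would instead pass --logfile=/var/log/celery/worker.log on the celery command line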
Is the behaviour of writing successful task completion to the error log correct?
No, it's not. I have the same setup and logging works fine.
celery.log has the task info:
[2015-07-23 11:40:07,066: INFO/MainProcess] Received task: foo[b5a6e0e8-1027-4005-b2f6-1ea032c73d34]
[2015-07-23 11:40:07,494: INFO/MainProcess] Task foo[b5a6e0e8-1027-4005-b2f6-1ea032c73d34] succeeded in 0.424549156s: 1
celery_err.log has some warnings/errors. Try restarting the supervisor process.
One issue you could be having is that Python buffers output by default.
In your supervisor file, you can disable this by setting the PYTHONUNBUFFERED environment variable; see my Django example supervisor file below:
[program:celery-myapp]
environment=DJANGO_SETTINGS_MODULE="myapp.settings.production",PYTHONUNBUFFERED=1
command=/home/me/.virtualenvs/myapp/bin/celery -A myapp worker -B -l DEBUG
directory=/home/me/www/saleor
user=me
stdout_logfile=/home/me/www/myapp/log/supervisor-celery.log
stderr_logfile=/home/me/www/myapp/log/supervisor-celery-err.log
autostart=true
autorestart=true
startsecs=10