I am trying to daemonize my celery/redis workers on Ubuntu 18.04 and I am making progress! Celery is now running, but it does not appear to be communicating with my Django app. I found that after removing the Type=forking directive from the celery.service file, celery started running.
# systemctl status celery.service
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-12-17 18:35:19 MST; 1min 52s ago
Main PID: 21509 (code=exited, status=1/FAILURE)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/celery.service
Dec 17 18:35:17 t-rex systemd[1]: Starting Celery Service...
Dec 17 18:35:19 t-rex sh[24331]: celery multi v4.3.0 (rhubarb)
Dec 17 18:35:19 t-rex sh[24331]: > Starting nodes...
Dec 17 18:35:19 t-rex sh[24331]: > w1@t-rex: OK
Dec 17 18:35:19 t-rex sh[24331]: > w2@t-rex: OK
Dec 17 18:35:19 t-rex sh[24331]: > w3@t-rex: OK
Dec 17 18:35:19 t-rex systemd[1]: Started Celery Service.
When I test celery from the Python prompt in my app's virtualenv, the test fails. This is the test I use in my app before I call a celery task.
>>> celery_app.control.broadcast('ping', reply=True, limit=1)
[]
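As an additional cross-check (a hedged sketch, not from the original post; the import assumes the usual proj/celery.py layout from the Celery/Django docs), you can compare the broker URL the app sees with the one the daemonized workers were started with, and ping the workers through the inspect API:
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'MemorabiliaJSON.settings')  # assumed settings module

from MemorabiliaJSON.celery import app as celery_app  # assumed module path

print(celery_app.conf.broker_url)                    # should match the broker the workers connect to
print(celery_app.control.inspect(timeout=2).ping())  # e.g. {'w1@t-rex': {'ok': 'pong'}}; None means no worker replied
If the ping comes back empty or None while the workers are running, the two sides are almost certainly pointing at different brokers (or different Redis databases).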
My celery.service file (straight from the Celery docs) with a few local changes:
[Unit]
Description=Celery Service
After=network.target redis.service
Requires=redis.service
[Service]
#Type=forking
User=www-data
Group=www-data
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/home/mark/python-projects/archive
ExecStart=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi start $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --loglevel="${CELERYD_LOG_LEVEL}"'
ExecReload=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi restart $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
Restart=always
[Install]
WantedBy=multi-user.target
and my environment file (also from the same celery docs):
# Names of the nodes to start
# here we have three nodes:
CELERYD_NODES="w1 w2 w3"
# or we could have a single node:
#CELERYD_NODES="w1"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/mark/.virtualenvs/archive/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="MemorabiliaJSON"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="DEBUG"
The redis server is running, so that should not be the issue. I am not sure if redis is talking to my daemonized celery or not.
I start celery with "celery -A MemorabiliaJSON worker -l debug" when using django runserver, and I am not sure if my daemonized celery needs something else to make it talk to my django apps.
Is there any magic needed to get django/apache/wsgi to work with daemonized celery? There is nothing in the celery log files when I try my test above.
Thanks for any assistance you can give me in debugging this problem!
Mark
Related
I'm running a Django app with uWSGI in Docker with docker-compose. I get the same error every time I:
Send a POST request with AJAX
In handling said request in my view, I use python's requests module, i.e. r = requests.get(some_url)
uWSGI says the following:
!!! uWSGI process 13 got Segmentation Fault !!!
DAMN ! worker 1 (pid: 13) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 24)
spawned 4 offload threads for uWSGI worker 1
The console in the browser says net::ERR_EMPTY_RESPONSE
I've tried using the requests module in different places, and wherever I put it I get the same segmentation fault. I'm also able to run everything fine outside of Docker with no errors, so I've narrowed it down to: docker + requests module = error.
Is there something that could be blocking the requests sent with the requests module from within the docker container? Thanks in advance for your help.
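One way to narrow this down (a hedged suggestion, not part of the original post) is to run the same requests call inside the running container but outside uWSGI, for example with docker-compose exec web python3 and a tiny script:
# does requests itself crash in this container, or only inside a uWSGI worker?
# the URL is just an example; substitute the one your view calls
import requests

resp = requests.get('https://example.com', timeout=5)
print(resp.status_code, len(resp.content))
If this works but the same call segfaults under uWSGI, the problem is in the uWSGI worker environment rather than in requests or Docker networking.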
Here's my uwsgi.ini file:
[uwsgi]
chdir = %d
module = my_project.wsgi:application
master = true
processes = 2
http = 0.0.0.0:8000
vacuum = true
pidfile = /tmp/my_project.pid
daemonize = %d/my_project.log
check-static = %d
static-expires = /* 7776000
offload-threads = %k
uid = 1000
gid = 1000
# there is no /etc/mime.types on the docker Arch Linux image
mime-file = %d/mime.types
Dockerfile:
FROM alpine:3.8
ENV PYTHONUNBUFFERED 1
RUN mkdir /my_project
WORKDIR /my_project
RUN apk add build-base python3-dev py3-pip python3
# deps for python cryptography
RUN apk add libffi-dev musl-dev openssl-dev
# dep for uwsgi
RUN apk add linux-headers
ADD requirements.txt /my_project/
RUN pip3 install -r requirements.txt
ADD . /my_project/
ENTRYPOINT ./start.sh
docker-compose.yml:
version: '3'
services:
  web:
    build: .
    entrypoint: ./start.sh
    volumes:
      - .:/my_project
    ports:
      - "8000:8000"
    environment:
      - DEBUG_LEVEL=INFO
    network_mode: "host"
start.sh:
#!/bin/sh
echo '' > logfile.log
uwsgi --ini uwsgi.ini
tail -f logfile.log
Solution: Change base image to Ubuntu 16.04 and everything works fine now.
I use celery (a jobs manager) in prod mode for a website (Django) on a CentOS 7 server.
My problem is that a celery task does not create a folder (see my_function and celery_task below).
The function:
def my_function():
    parent_folder = THE_PARENT_PATH
    if not os.path.exists(parent_folder):
        os.makedirs(parent_folder)
    # The folder THE_PARENT_PATH is created
    celery_task(parent_folder)
The celery task:
@app.task(name='a task')
def celery_task(parent_folder):
    import getpass; print("permission : ", getpass.getuser())
    # permission : apache
    path_1 = os.path.join(parent_folder, "toto")
    if not os.path.exists(path_1):
        os.makedirs(path_1)
    # The folder path_1 is NOT created
    # ..... some other instructions ...
    # Singularity image run (needs the path_1 folder)
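To see what is actually going wrong rather than guessing, here is a hedged debugging sketch (the helper name is mine, not from the original post) that logs the effective user and surfaces any OSError from inside the task:
import logging
import os
import pwd

logger = logging.getLogger(__name__)

def make_dir_verbose(path):
    # log which system user the worker process actually runs as
    logger.info("uid=%s user=%s creating %s", os.getuid(), pwd.getpwuid(os.getuid()).pw_name, path)
    try:
        os.makedirs(path, exist_ok=True)  # exist_ok avoids a race if the dir already exists
    except OSError as exc:
        logger.error("makedirs(%s) failed: %s", path, exc)
        raise
    logger.info("exists=%s writable=%s", os.path.exists(path), os.access(path, os.W_OK))
Calling make_dir_verbose(path_1) inside celery_task and reading the worker log should show whether the call raises (for example PermissionError) or whether the folder is created somewhere other than expected.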
I use Supervisord for daemonization of celery.
celery.init
[program:sitecelery]
command=/etc/supervisord.d/celery.sh
directory=/mnt/site/
user=apache
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
priority=999
The folder path_1 is created when user=root, but I want it to be created by the apache user, not root.
celery.sh
#!/bin/bash
cd /mnt/site/
exec ../myenv/bin/python3 -m celery -A site.celery_settings worker -l info --autoscale 20
sudo systemctl status supervisord
● supervisord.service - Process Monitoring and Control Daemon
Loaded: loaded (/usr/lib/systemd/system/supervisord.service; disabled; vendor preset: disabled)
Active: active (running) since lun. 2018-10-15 09:09:05 CEST; 4min 59s ago
Process: 61477 ExecStart=/usr/bin/supervisord -c /etc/supervisord.conf (code=exited, status=0/SUCCESS)
Main PID: 61480 (supervisord)
CGroup: /system.slice/supervisord.service
├─61480 /usr/bin/python /usr/bin/supervisord -c /etc/supervisord.conf
└─61491 ../myenv/bin/python3 -m celery -A Site_CNR.celery_settings worker -l info --autoscale 20
oct. 15 09:09:05 web01 systemd[1]: Starting Process Monitoring and Control Daemon...
oct. 15 09:09:05 web01 systemd[1]: Started Process Monitoring and Control Daemon.
oct. 15 09:09:17 web01 Singularity[61669]: action-suid (U=48,P=61669)> Home directory is not owned by calling user: /usr/share/httpd
oct. 15 09:09:17 web01 Singularity[61669]: action-suid (U=48,P=61669)> Retval = 255
oct. 15 09:09:17 web01 Singularity[61678]: action-suid (U=48,P=61678)> Home directory is not owned by calling user: /usr/share/httpd
oct. 15 09:09:17 web01 Singularity[61678]: action-suid (U=48,P=61678)> Retval = 255
EDIT 1: os.makedirs
In the celery task:
if not os.path.exists(path_1):
    print("test")
    # test
    print(os.makedirs(path_1))
    # None
    os.makedirs(path_1)
os.makedirs returns None :/
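For what it's worth, None is not an error indicator here: os.makedirs always returns None and signals failure by raising OSError. A tiny hedged check (the path is illustrative) makes that visible:
import os

path_1 = "/mnt/site/some_parent/toto"  # illustrative; use the task's real path_1
try:
    os.makedirs(path_1)
except OSError as exc:
    print("makedirs failed:", exc)     # e.g. PermissionError if apache cannot write here
else:
    print("created:", os.path.isdir(path_1))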
I don't know why, but following a fix from another post about this error, running sudo chown -R apache:apache /usr/share/httpd/ resolved the problem oO
How can I use Django with AWS Elastic Beanstalk and also run tasks with celery on the main node only?
This is how I set up celery with Django on Elastic Beanstalk, with scalability working fine.
Please keep in mind that the 'leader_only' option for container_commands works only on environment rebuild or deployment of the app. If the service runs long enough, the leader node may be removed by Elastic Beanstalk. To deal with that, you may have to apply instance protection to your leader node. Check: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html#instance-protection-instance
Add bash script for celery worker and beat configuration.
Add file root_folder/.ebextensions/files/celery_configuration.txt:
#!/usr/bin/env bash
# Get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# Create celery configuration script
celeryconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A django_app --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv
[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A django_app --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# Create the celery supervisord conf script
echo "$celeryconf" | tee /opt/python/etc/celery.conf
# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# Reread the supervisord config
supervisorctl -c /opt/python/etc/supervisord.conf reread
# Update supervisord in cache without restarting all services
supervisorctl -c /opt/python/etc/supervisord.conf update
# Start/Restart celeryd through supervisord
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker
Take care that the script executes during deployment, but only on the main node (leader_only: true).
Add file root_folder/.ebextensions/02-python.config:
container_commands:
  04_celery_tasks:
    command: "cat .ebextensions/files/celery_configuration.txt > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
  05_celery_tasks_run:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
Beat is configurable without the need for redeployment when using a separate Django application: https://pypi.python.org/pypi/django_celery_beat.
Storing task results is a good idea too: https://pypi.python.org/pypi/django_celery_results
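As a hedged illustration of the beat point above (the task path and schedule are examples, not from the original answer), a periodic task can be created from the Django shell once django_celery_beat's migrations have run:
from django_celery_beat.models import IntervalSchedule, PeriodicTask

schedule, _ = IntervalSchedule.objects.get_or_create(
    every=10,
    period=IntervalSchedule.MINUTES,
)
PeriodicTask.objects.get_or_create(
    name="example: add every 10 minutes",  # illustrative name
    defaults={"interval": schedule, "task": "django_app.tasks.add"},  # assumed task path
)
Because the schedule lives in the database, changing it does not require a redeployment.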
File requirements.txt
celery==4.0.0
django_celery_beat==1.0.1
django_celery_results==1.0.1
pycurl==7.43.0 --global-option="--with-nss"
Configure celery for Amazon SQS broker
(Get your desired endpoint from list: http://docs.aws.amazon.com/general/latest/gr/rande.html)
root_folder/django_app/settings.py:
...
CELERY_RESULT_BACKEND = 'django-db'
CELERY_BROKER_URL = 'sqs://%s:%s@' % (aws_access_key_id, aws_secret_access_key)
# Due to error on lib region N Virginia is used temporarily. please set it on Ireland "eu-west-1" after fix.
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": "eu-west-1",
    'queue_name_prefix': 'django_app-%s-' % os.environ.get('APP_ENV', 'dev'),
    'visibility_timeout': 360,
    'polling_interval': 1
}
...
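With CELERY_RESULT_BACKEND = 'django-db' as above, results are stored by django_celery_results and can be inspected from the Django shell; a minimal hedged sketch:
from django_celery_results.models import TaskResult

for result in TaskResult.objects.order_by('-date_done')[:5]:
    print(result.task_id, result.status, result.result)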
Celery configuration for the django_app Django app
Add file root_folder/django_app/celery.py:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_app.settings')
app = Celery('django_app')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
Modify file root_folder/django_app/__init__.py:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from django_app.celery import app as celery_app
__all__ = ['celery_app']
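With the two files above in place, any installed app can define tasks that autodiscover_tasks() will pick up; a minimal hedged example (the tasks.py module and the add function are illustrative, not from the original answer):
# django_app/tasks.py
from celery import shared_task

@shared_task
def add(x, y):
    return x + y
Calling add.delay(2, 3) from a view or the shell then routes the work through the SQS broker configured above.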
Check also:
How do you run a worker with AWS Elastic Beanstalk? (solution without scalability)
Pip Requirements.txt --global-option causing installation errors with other packages. "option not recognized" (solution for problems coming from an obsolete pip on Elastic Beanstalk that cannot deal with the global options needed to properly resolve the pycurl dependency)
This is how I extended the answer by @smentek to allow for multiple worker instances and a single beat instance - the same thing applies where you have to protect your leader (I still don't have an automated solution for that).
Please note that envvar updates to EB via the EB CLI or the web interface are not reflected by celery beat or workers until an app server restart has taken place. This caught me off guard once.
A single celery_configuration.sh file outputs two scripts for supervisord. Note that celery-beat has autostart=false, otherwise you end up with many beats after an instance restart:
# get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# create celery beat config script
celerybeatconf="[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A lexvoco --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=false
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 10
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# create celery worker config script
celeryworkerconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A lexvoco --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999
environment=$celeryenv"
# create files for the scripts
echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf
echo "$celeryworkerconf" | tee /opt/python/etc/celeryworker.conf
# add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celerybeat.conf celeryworker.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
# update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
Then in container_commands we only restart beat on leader:
container_commands:
  # create the celery configuration file
  01_create_celery_beat_configuration_file:
    command: "cat .ebextensions/files/celery_configuration.sh > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && sed -i 's/\r$//' /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
  # restart celery beat if leader
  02_start_celery_beat:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat"
    leader_only: true
  # restart celery worker
  03_start_celery_worker:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker"
If someone is following smentek's answer and getting the error:
05_celery_tasks_run: /usr/bin/env bash does not exist.
know that, if you are using Windows, your problem might be that the "celery_configuration.txt" file has Windows EOL when it should have Unix EOL. If you use Notepad++, open the file and click on "Edit > EOL Conversion > Unix (LF)". Save, redeploy, and the error is no longer there.
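If you would rather not depend on an editor, a hedged alternative is a small Python snippet (run from the project root; the path matches the file from the answer above) that rewrites the file with Unix line endings:
path = ".ebextensions/files/celery_configuration.txt"
with open(path, "rb") as f:
    data = f.read()
with open(path, "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))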
Also, a couple of warnings for really-amateur people like me:
Be sure to include "django_celery_beat" and "django_celery_results" in your "INSTALLED_APPS" in settings.py file.
To check celery errors, connect to your instance with "eb ssh" and then "tail -n 40 /var/log/celery-worker.log" and "tail -n 40 /var/log/celery-beat.log" (where "40" refers to the number of lines you want to read from the file, starting from the end).
Hope this helps someone, it would've saved me some hours!
I'm using this guide to set up an intranet server. Everything goes OK, the server works and I can check that it is working on my network.
But when I log out, I get a 404 error.
The sock file is in the path indicated in gunicorn_start.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ls -l run/
total 0
srwxrwxrwx 1 javier javier 0 mar 10 17:31 cmi.sock
Actually I can see the workers when I list the processes.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ps aux | grep cmi
javier 17354 0.0 0.2 14652 8124 ? S 17:27 0:00 gunicorn: master [cmi]
javier 17365 0.0 0.3 18112 10236 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17366 0.0 0.3 18120 10240 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17367 0.0 0.5 36592 17496 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17787 0.0 0.0 4408 828 pts/0 S+ 17:55 0:00 grep --color=auto cmi
And supervisorctl responds that the process is running:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ sudo supervisorctl status cmi
[sudo] password for javier:
cmi RUNNING pid 17354, uptime 0:29:21
There is an error in the nginx logs:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ tail logs/nginx-error.log
2014/03/10 17:38:57 [error] 17299#0: *19 connect() to
unix:/home/javier/workspace/cmi/cmi/run/cmi.sock failed (111: Connection refused) while
connecting to upstream, client: 10.69.0.174, server: , request: "GET / HTTP/1.1",
upstream: "http://unix:/home/javier/workspace/cmi/cmi/run/cmi.sock:/", host:
"10.69.0.68:2014"
Again, the error appears only when I log out or close the session, but everything works fine when I run or reload supervisor and stay connected.
By the way, nginx, supervisor and gunicorn run under my uid.
Thanks in advance.
Edit Supervisor conf
[program:cmi]
command = /home/javier/entornos/cmi2014/bin/cmi_start
user = javier
stdout_logfile = /home/javier/workspace/cmi/cmi/logs/cmi_supervisor.log
redirect_stderr = true
autostart=true
autorestart=true
Gunicorn start script
#!/bin/bash
NAME="cmi" # Name of the application
DJANGODIR=/home/javier/workspace/cmi/cmi # Django project directory
SOCKFILE=/home/javier/workspace/cmi/cmi/run/cmi.sock # we will communicate using this unix socket
USER=javier # the user to run as
GROUP=javier # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=cmi.settings # which settings file should Django use
DJANGO_WSGI_MODULE=cmi.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/javier/entornos/cmi2014/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
export CMI_SECRET_KEY='***'
export CMI_DATABASE_HOST='***'
export CMI_DATABASE_NAME='***'
export CMI_DATABASE_USER='***'
export CMI_DATABASE_PASS='***'
export CMI_DATABASE_PORT='3306'
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/javier/entornos/cmi2014/bin/gunicorn ${DJANGO_WSGI_MODULE}:application --name $NAME --workers $NUM_WORKERS --user=$USER --group=$GROUP --log-level=debug --bind=unix:$SOCKFILE
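For anyone debugging the same "connection refused" from nginx, here is a hedged sketch (not part of the original setup; the socket path is taken from gunicorn_start above) that checks whether anything is still listening on the socket after logging out:
import socket

SOCK = "/home/javier/workspace/cmi/cmi/run/cmi.sock"
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(SOCK)
    print("gunicorn is accepting connections on", SOCK)
except OSError as exc:
    print("connect failed:", exc)  # ConnectionRefusedError matches nginx's (111: Connection refused)
finally:
    s.close()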
I do not understand why I get an error about not finding my "project.wsgi" module when supervisor tries to start the app automatically (for example, when the server is starting).
2014-02-15 05:13:05 [1011] [INFO] Using worker: sync
2014-02-15 05:13:05 [1016] [INFO] Booting worker with pid: 1016
2014-02-15 05:13:05 [1016] [ERROR] Exception in worker process:
Traceback (most recent call last):
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 495, in spawn_worker
worker.init_process()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 106, in init_process
self.wsgi = self.app.wsgi()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 114, in wsgi
self.callable = self.load()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 62, in load
return self.load_wsgiapp()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load_wsgiapp
return util.import_app(self.app_uri)
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/util.py", line 354, in import_app
__import__(module)
ImportError: No module named myproject.wsgi
Whereas I do not get this error and it works fine when I manually do:
sudo supervisorctl start myapp
What is different?
Thanks
UPDATE:
supervisor conf file:
[program:myproject]
command=/var/local/sites/myproject/run/gunicorn_start ; Command to start app
user=myproject ; User to run as
autostart=true
autorestart=true
loglevel=info
redirect_stderr=false
stdout_logfile=/var/local/sites/myproject/logs/supervisor-myproject-stdout.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stderr_logfile=/var/local/sites/myproject/logs/supervisor-myproject-stderr.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
/var/local/sites/myproject/run/gunicorn_start:
#!/bin/bash
NAME="myproject_app" # Name of the application
USER=myproject # the user to run as
GROUP=myproject # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
# Logs config
LOG_LEVEL=info
ACCESS_LOGFILE=/var/local/sites/myproject/logs/gunicorn-myproject-access.log
ERROR_LOGFILE=/var/local/sites/myproject/logs/gunicorn-myproject-error.log
echo "Starting $NAME"
exec envdir /var/local/sites/myproject/env_vars /var/local/sites/myproject/venv/bin/gunicorn myproject.wsgi:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=$LOG_LEVEL \
--bind=unix:/tmp/myproject.gunicorn.sock \
--access-logfile=$ACCESS_LOGFILE \
--error-logfile=$ERROR_LOGFILE
I think you should add directory to your supervisor configuration file. This is my template; I use it in every project and it works fine:
[program:PROJECT_NAME]
command=/opt/sites/PROJECT_NAME/env/bin/gunicorn -c /opt/sites/etc/gunicorn/GUNICORN_CONF.conf.py PROJECT_NAME.wsgi:application
directory=/opt/sites/PROJECT_NAME
environment=PATH="/opt/sites/PROJECT/env/bin"
autostart=true
autorestart=false
redirect_stderr=True
stdout_logfile=/tmp/PROJECT_NAME.stdout
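To see why directory matters, here is a hedged check (paths from the question; run it inside the virtualenv, with the same environment variables gunicorn gets) that imports the module the way gunicorn will:
import importlib
import sys

# gunicorn adds its working directory to sys.path, so "myproject.wsgi" must be
# importable from there; simulate that from the directory supervisor should use
sys.path.insert(0, "/var/local/sites/myproject")  # the directory containing the myproject/ package
module = importlib.import_module("myproject.wsgi")
print(module.application)                         # the WSGI callable gunicorn expects
If this import fails with the same ImportError, the working directory (or PYTHONPATH) seen by supervisor is the culprit.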
I had the same problem before. I'm using the following lines in my gunicorn_start instead of envdir. I'm running a Django application within a virtualenv located in /env/nafd/, and my Django app is located in /env/nafd/nafd_proj:
..
DJANGODIR=/to/path/app_proj
cd $DJANGODIR
source ../bin/activate
exec ../bin/gunicorn nafd_proj.wsgi:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=$LOG_LEVEL \
--bind=unix:/tmp/myproject.gunicorn.sock \
--access-logfile=$ACCESS_LOGFILE \
--error-logfile=$ERROR_LOGFILE
It's obvious, but worth mentioning:
check if the "supervisord" daemon is running (service supervisor status).
Here is the setup that I have, using a Flask app with WSGI (Gunicorn), controlled via Supervisor, and it is working perfectly.
Flask App
root@ilg40:~# ll /etc/tdm/flask/
total 1120
drwx------ 5 root root 4096 Jan 24 19:47 ./
drwx------ 3 root root 4096 Jan 23 00:20 ../
-r-------- 1 root root 1150 Aug 31 17:54 favicon.ico
drw------- 2 root root 4096 Jan 13 22:51 static/
-rw------- 1 root root 883381 Jan 23 20:09 tdm.log
-rwx------ 1 root root 73577 Jan 23 21:37 tdm.py*
-rw------- 1 root root 56445 Jan 23 21:37 tdm.pyc
drw------- 2 root root 4096 Jan 23 20:08 templates/
-rw-r--r-- 1 root root 493 Jan 23 22:42 wsgi.py
-rw-r--r-- 1 root root 720 Jan 23 22:42 wsgi.pyc
srwxrwxrwx 1 root root 0 Jan 24 19:47 wsgi.sock=
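For reference, the wsgi.py in that listing only needs to expose the application callable that gunicorn imports by default; a minimal hedged sketch (it assumes the Flask object in tdm.py is named app):
# /etc/tdm/flask/wsgi.py (sketch)
from tdm import app as application  # gunicorn's default lookup is "application" in the module it is given

if __name__ == "__main__":
    # handy for a quick test without gunicorn
    application.run()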
Supervisor Config File
root@ilg40:~# cat /etc/supervisor/conf.d/wsgi_flask.conf
[program:wsgi_flask]
command = gunicorn --preload --bind unix:/etc/tdm/flask/wsgi.sock --workers 4 --pythonpath /etc/tdm/flask wsgi
process_name = wsgi_flask
autostart = true
autorestart = true
stdout_logfile = /var/log/wsgi_flask/wsgi_flask.out.log
stderr_logfile = /var/log/wsgi_flask/wsgi_flask.err.log
Update Supervisord About The New Process
root@ilg40:~# supervisorctl update
wsgi_flask: added process group
Checking Process Status
root@ilg40:~# supervisorctl status wsgi_flask
wsgi_flask RUNNING pid 1129, uptime 0:29:12
Note: in the setup above I'm not using a virtualenv. If you are, I believe you would need to configure the directory variable for the process and also set the environment PATH for the command variable (adding env PATH="$PATH:/the/app/path" gunicorn ...), since gunicorn, Flask, and so on are only located inside of the virtualenv.