project.wsgi not found when started automatically but not manually - Django

I do not understand why I get an error about not finding my "myproject.wsgi" module when supervisor tries to start the app automatically (for example, when the server is booting):
2014-02-15 05:13:05 [1011] [INFO] Using worker: sync
2014-02-15 05:13:05 [1016] [INFO] Booting worker with pid: 1016
2014-02-15 05:13:05 [1016] [ERROR] Exception in worker process:
Traceback (most recent call last):
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 495, in spawn_worker
worker.init_process()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 106, in init_process
self.wsgi = self.app.wsgi()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 114, in wsgi
self.callable = self.load()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 62, in load
return self.load_wsgiapp()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load_wsgiapp
return util.import_app(self.app_uri)
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/util.py", line 354, in import_app
__import__(module)
ImportError: No module named myproject.wsgi
Whereas I do not get this error and it works fine when I manually do:
sudo supervisorctl start myapp
What is different?
Thanks
UPDATE:
supervisor conf file:
[program:myproject]
command=/var/local/sites/myproject/run/gunicorn_start ; Command to start app
user=myproject ; User to run as
autostart=true
autorestart=true
loglevel=info
redirect_stderr=false
stdout_logfile=/var/local/sites/myproject/logs/supervisor-myproject-stdout.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stderr_logfile=/var/local/sites/myproject/logs/supervisor-myproject-stderr.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
/var/local/sites/myproject/run/gunicorn_start:
#!/bin/bash
NAME="myproject_app" # Name of the application
USER=myproject # the user to run as
GROUP=myproject # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
# Logs config
LOG_LEVEL=info
ACCESS_LOGFILE=/var/local/sites/myproject/logs/gunicorn-myproject-access.log
ERROR_LOGFILE=/var/local/sites/myproject/logs/gunicorn-myproject-error.log
echo "Starting $NAME"
exec envdir /var/local/sites/myproject/env_vars /var/local/sites/myproject/venv/bin/gunicorn myproject.wsgi:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=$LOG_LEVEL \
--bind=unix:/tmp/myproject.gunicorn.sock \
--access-logfile=$ACCESS_LOGFILE \
--error-logfile=$ERROR_LOGFILE

I think you should add directory to your supervisor configuration file. This is my template; I use it in every project and it works fine:
[program:PROJECT_NAME]
command=/opt/sites/PROJECT_NAME/env/bin/gunicorn -c /opt/sites/etc/gunicorn/GUNICORN_CONF.conf.py PROJECT_NAME.wsgi:application
directory=/opt/sites/PROJECT_NAME
environment=PATH="/opt/sites/PROJECT/env/bin"
autostart=true
autorestart=false
redirect_stderr=True
stdout_logfile=/tmp/PROJECT_NAME.stdout
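Applied to the configuration from the question, that would mean roughly the following (paths taken from the question; the directory and environment lines are the additions, the rest is the asker's existing config):
[program:myproject]
command=/var/local/sites/myproject/run/gunicorn_start ; Command to start app
directory=/var/local/sites/myproject ; run gunicorn_start from the project directory
environment=PATH="/var/local/sites/myproject/venv/bin"
user=myproject
autostart=true
autorestart=true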

I had the same problem before. I use the following lines in my gunicorn_start instead of envdir. I'm running a Django application within a virtualenv located in /env/nafd/, and my Django app is located in /env/nafd/nafd_proj:
..
DJANGODIR=/to/path/app_proj
cd $DJANGODIR
source ../bin/activate
exec ../bin/gunicorn nafd_proj.wsgi:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=$LOG_LEVEL \
--bind=unix:/tmp/myproject.gunicorn.sock \
--access-logfile=$ACCESS_LOGFILE \
--error-logfile=$ERROR_LOGFILE
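If it helps, the role of the working directory can be seen outside supervisor with a check like this (paths are the ones from the original question, assuming the usual layout with /var/local/sites/myproject/myproject/wsgi.py):
# from an unrelated directory (as at boot time), the package is not on sys.path
cd / && /var/local/sites/myproject/venv/bin/python -c "import myproject.wsgi"
# -> ImportError: No module named myproject.wsgi
# from the project directory, Python picks the package up from the current directory
cd /var/local/sites/myproject && ./venv/bin/python -c "import myproject.wsgi"
# -> imports (or fails later on settings/env vars, which already shows the module itself was found)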

It's obvious, but it's worth mentioning:
check that the "supervisord" daemon is running (service supervisor status).
Here is the setup I have, using a Flask app served with Gunicorn (WSGI) and controlled via Supervisor, and it is working perfectly.
Flask App
root@ilg40:~# ll /etc/tdm/flask/
total 1120
drwx------ 5 root root 4096 Jan 24 19:47 ./
drwx------ 3 root root 4096 Jan 23 00:20 ../
-r-------- 1 root root 1150 Aug 31 17:54 favicon.ico
drw------- 2 root root 4096 Jan 13 22:51 static/
-rw------- 1 root root 883381 Jan 23 20:09 tdm.log
-rwx------ 1 root root 73577 Jan 23 21:37 tdm.py*
-rw------- 1 root root 56445 Jan 23 21:37 tdm.pyc
drw------- 2 root root 4096 Jan 23 20:08 templates/
-rw-r--r-- 1 root root 493 Jan 23 22:42 wsgi.py
-rw-r--r-- 1 root root 720 Jan 23 22:42 wsgi.pyc
srwxrwxrwx 1 root root 0 Jan 24 19:47 wsgi.sock=
Supervisor Config File
root@ilg40:~# cat /etc/supervisor/conf.d/wsgi_flask.conf
[program:wsgi_flask]
command = gunicorn --preload --bind unix:/etc/tdm/flask/wsgi.sock --workers 4 --pythonpath /etc/tdm/flask wsgi
process_name = wsgi_flask
autostart = true
autorestart = true
stdout_logfile = /var/log/wsgi_flask/wsgi_flask.out.log
stderr_logfile = /var/log/wsgi_flask/wsgi_flask.err.log
Update Supervisord About The New Process
root@ilg40:~# supervisorctl update
wsgi_flask: added process group
Checking Process Status
root@ilg40:~# supervisorctl status wsgi_flask
wsgi_flask RUNNING pid 1129, uptime 0:29:12
Note: in the setup above I'm not using a virtualenv. With a virtualenv, I believe you would need to configure the directory variable for the process and also set the environment PATH for the command variable (e.g. adding env PATH="$PATH:/the/app/path" gunicorn...), since gunicorn, Flask, and so on are only installed inside the virtualenv.
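For a virtualenv-based setup, the same program section might look roughly like this (the virtualenv path is a placeholder; everything else is the config above):
[program:wsgi_flask]
command = /the/app/path/env/bin/gunicorn --preload --bind unix:/etc/tdm/flask/wsgi.sock --workers 4 --pythonpath /etc/tdm/flask wsgi
directory = /etc/tdm/flask
environment = PATH="/the/app/path/env/bin"
process_name = wsgi_flask
autostart = true
autorestart = true
stdout_logfile = /var/log/wsgi_flask/wsgi_flask.out.log
stderr_logfile = /var/log/wsgi_flask/wsgi_flask.err.log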

Related

Celery/systemd not talking to my django app

I am trying to daemonize my celery/redis workers on Ubuntu 18.04 and I am making progress! Celery is now running, but it does not appear to be communicating with my django app. I found that after removing the Type=forking directive from the celery.service file, celery started working.
# systemctl status celery.service
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-12-17 18:35:19 MST; 1min 52s ago
Main PID: 21509 (code=exited, status=1/FAILURE)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/celery.service
Dec 17 18:35:17 t-rex systemd[1]: Starting Celery Service...
Dec 17 18:35:19 t-rex sh[24331]: celery multi v4.3.0 (rhubarb)
Dec 17 18:35:19 t-rex sh[24331]: > Starting nodes...
Dec 17 18:35:19 t-rex sh[24331]: > w1@t-rex: OK
Dec 17 18:35:19 t-rex sh[24331]: > w2@t-rex: OK
Dec 17 18:35:19 t-rex sh[24331]: > w3@t-rex: OK
Dec 17 18:35:19 t-rex systemd[1]: Started Celery Service.
When I test celery from the Python prompt in my app's virtualenv, the test fails. This is the test I use in my app before I call a celery task:
>>> celery_app.control.broadcast('ping', reply=True, limit=1)
[]
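(For reference, the equivalent check from the command line, using the same virtualenv and app name as above, would be something like the following; if no node answers, the workers are not reachable on the broker this app is configured for.)
/home/mark/.virtualenvs/archive/bin/celery -A MemorabiliaJSON inspect ping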
My celery.service file (straight from the celery docs), with a few local changes:
[Unit]
Description=Celery Service
After=network.target redis.service
Requires=redis.service
[Service]
#Type=forking
User=www-data
Group=www-data
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/home/mark/python-projects/archive
ExecStart=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi start $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --loglevel="${CELERYD_LOG_LEVEL}"'
ExecReload=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi restart $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
Restart=always
[Install]
WantedBy=multi-user.target
and my environment file (also from the same celery docs):
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1 w2 w3"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/mark/.virtualenvs/archive/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="MemorabiliaJSON"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="DEBUG"
The redis server is running, so that should not be the issue. I am not sure whether redis is talking to my daemonized celery or not.
I start celery with "celery -A MemorabiliaJSON worker -l debug" when using django runserver, and I am not sure if my daemonized celery needs something else to make it talk to my django app.
Is there any magic needed to get django/apache/wsgi to work with daemonized celery? There is nothing in the celery log files when I try my test above.
Thanks for any assistance you can give me in debugging this problem!
Mark

Celery - Permission Problem - Create folder

I use celery (a job manager) in production mode for a Django website on a CentOS 7 server.
My problem is that, in a celery task, my function does not create a folder (see my_function below).
the function
def my_function():
    parent_folder = THE_PARENT_PATH
    if not os.path.exists(parent_folder):
        os.makedirs(parent_folder)
    # The folder THE_PARENT_PATH is created
    celery_task(parent_folder)
the celery task
@app.task(name='a task')
def celery_task(parent_folder):
    import getpass; print("permission : ", getpass.getuser())
    # permission : apache
    path_1 = os.path.join(parent_folder, "toto")
    if not os.path.exists(path_1):
        os.makedirs(path_1)
    # The folder path_1 is NOT created
    # ... some other instructions ...
    # Singularity image run (needs the path_1 folder)
I use Supervisord for daemonization of celery.
 celery.init
[program:sitecelery]
command=/etc/supervisord.d/celery.sh
directory=/mnt/site/
user=apache
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
priority=999
The folder path_1 is created when user=root, but I want it to run not as root but as the apache user.
celery.sh
#!/bin/bash
cd /mnt/site/
exec ../myenv/bin/python3 -m celery -A site.celery_settings worker -l info --autoscale 20
sudo systemctl status supervisord
● supervisord.service - Process Monitoring and Control Daemon
Loaded: loaded (/usr/lib/systemd/system/supervisord.service; disabled; vendor preset: disabled)
Active: active (running) since lun. 2018-10-15 09:09:05 CEST; 4min 59s ago
Process: 61477 ExecStart=/usr/bin/supervisord -c /etc/supervisord.conf (code=exited, status=0/SUCCESS)
Main PID: 61480 (supervisord)
CGroup: /system.slice/supervisord.service
├─61480 /usr/bin/python /usr/bin/supervisord -c /etc/supervisord.conf
└─61491 ../myenv/bin/python3 -m celery -A Site_CNR.celery_settings worker -l info --autoscale 20
oct. 15 09:09:05 web01 systemd[1]: Starting Process Monitoring and Control Daemon...
oct. 15 09:09:05 web01 systemd[1]: Started Process Monitoring and Control Daemon.
oct. 15 09:09:17 web01 Singularity[61669]: action-suid (U=48,P=61669)> Home directory is not owned by calling user: /usr/share/httpd
oct. 15 09:09:17 web01 Singularity[61669]: action-suid (U=48,P=61669)> Retval = 255
oct. 15 09:09:17 web01 Singularity[61678]: action-suid (U=48,P=61678)> Home directory is not owned by calling user: /usr/share/httpd
oct. 15 09:09:17 web01 Singularity[61678]: action-suid (U=48,P=61678)> Retval = 255
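(To narrow down whether this is a plain filesystem permission problem for the apache user, and to see which HOME it gets, which is what Singularity is complaining about above, a quick check along these lines can help; the path below is a placeholder for THE_PARENT_PATH:)
sudo -u apache -H bash -c 'echo "HOME=$HOME"; mkdir -p /path/to/parent_folder/toto && echo "mkdir ok" || echo "mkdir failed"'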
EDIT 1 os.makedirs
In the celery task:
if not os.path.exists(path_1):
    print("test")
    # test
    print(os.makedirs(path_1))
    # None
    os.makedirs(path_1)
The os.makedirs call returns None :/
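(Note that this is expected: os.makedirs has no return value; it always returns None and reports failure by raising an exception, so the print above only shows that the call did not raise. A minimal illustration:)
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "makedirs_demo", "toto")
print(os.makedirs(path, exist_ok=True))   # prints None even though the directory was created
print(os.path.isdir(path))                # True
# an actual failure (e.g. PermissionError) would raise instead of returning a value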
I don't know why, but based on a fix posted for a similar error, running sudo chown -R apache:apache /usr/share/httpd/ resolved this problem oO

uwsgi cannot process request longer than one minute

I have a Django application that is served from within a Docker container via uWSGI. I have prepared a custom view just to reproduce the issue I'm describing. It looks exactly like below:
def get(self, request):
    logger = logging.getLogger('ReleaseReport')
    logger.critical('Entering and sleeping')
    time.sleep(180)
    logger.critical('Awaking')
    return Response({'response': 'anything'})
The only thing it does (intentionally) is log a message, sleep for 3 minutes, and log another message afterwards.
Here is what the log shows after I try to visit the view from Firefox / Chrome / PyCharm's REST API client:
spawned uWSGI worker 9 (pid: 14, cores: 1)
spawned uWSGI worker 10 (pid: 15, cores: 1)
spawned uWSGI http 1 (pid: 16)
CRITICAL 2018-08-31 12:10:37,658 views Entering and sleeping
CRITICAL 2018-08-31 12:11:37,742 views Entering and sleeping
CRITICAL 2018-08-31 12:11:38,687 views Awaking
[pid: 10|app: 0|req: 1/1] 10.187.133.2 () {36 vars in 593 bytes} [Fri Aug 31 12:10:37 2018] GET /api/version/ => generated 5156 bytes in 61229 msecs (HTTP/1.1 200) 4 headers in 137 bytes (1 switches on core 0)
CRITICAL 2018-08-31 12:12:37,752 views Entering and sleeping
CRITICAL 2018-08-31 12:12:38,784 views Awaking
[pid: 15|app: 0|req: 1/2] 10.187.133.2 () {36 vars in 593 bytes} [Fri Aug 31 12:11:37 2018] GET /api/version/ => generated 5156 bytes in 61182 msecs (HTTP/1.1 200) 4 headers in 137 bytes (1 switches on core 0)
CRITICAL 2018-08-31 12:13:38,020 views Entering and sleeping
CRITICAL 2018-08-31 12:13:38,776 views Awaking
[pid: 10|app: 0|req: 2/3] 10.187.133.2 () {36 vars in 593 bytes} [Fri Aug 31 12:12:37 2018] GET /api/version/ => generated 5156 bytes in 61034 msecs (HTTP/1.1 200) 4 headers in 137 bytes (1 switches on core 0)
After one minute, the view seems to be executed again, and after another minute a third execution starts. Moreover, the log says HTTP 200, but the client doesn't receive the data and just reports that it cannot load it after a few more minutes (depending on the client). However, the first HTTP 200 in the log occurs well before the client gives up.
Any clues what may be causing that issue? Here is my uwsgi.ini:
[uwsgi]
http = 0.0.0.0:8000
chdir = /app
module = web_server.wsgi:application
pythonpath = /app
static-map = /static=/app/static
master = true
processes = 10
vacuum = true
The Dockerfile command is as follows:
/usr/local/bin/uwsgi --ini /app/uwsgi.ini
In my real application this means the client thinks the request failed, but since the view was actually executed and finished 3 times, there are 3 records in the database. Changing the number of worker processes to 1 doesn't change much: instead of waiting one minute to spawn the view again, it is spawned after the previous one finishes.
What's wrong with my configuration?
EDIT:
I have changed my view a bit; it now accepts a sleep-time parameter and looks like this:
def get(self, request, minutes=None):
    minutes = int(minutes)
    original_minutes = minutes
    logger = logging.getLogger(__name__)
    while minutes > 0:
        logger.critical(f'Sleeping, {minutes} more minutes...')
        time.sleep(60)
        minutes -= 1
    logger.critical(f'Slept for {original_minutes} minutes...')
    return Response({'slept_for': original_minutes})
Now, curling:
> curl http://test-host/api/test/0
{"slept_for":0}
> curl http://test-host/api/test/1
{"slept_for":1}
> curl http://test-host/api/test/2
curl: (52) Empty reply from server
In log:
CRITICAL 2018-08-31 14:23:36,200 views Slept for 0 minutes...
[pid: 10|app: 0|req: 1/14] 10.160.43.172 () {28 vars in 324 bytes} [Fri Aug 31 14:23:35 2018] GET /api/test/0 => generated 15 bytes in 265 msecs (HTTP/1.1 200) 4 headers in 129 bytes (1 switches on core 0)
CRITICAL 2018-08-31 14:23:42,878 views Slept for 0 minutes...
[pid: 10|app: 0|req: 2/15] 10.160.43.172 () {28 vars in 324 bytes} [Fri Aug 31 14:23:42 2018] GET /api/test/0 => generated 15 bytes in 1 msecs (HTTP/1.1 200) 4 headers in 129 bytes (1 switches on core 0)
CRITICAL 2018-08-31 14:23:46,370 views Sleeping, 1 more minutes...
CRITICAL 2018-08-31 14:24:46,380 views Slept for 1 minutes...
[pid: 10|app: 0|req: 3/16] 10.160.43.172 () {28 vars in 324 bytes} [Fri Aug 31 14:23:46 2018] GET /api/test/1 => generated 15 bytes in 60011 msecs (HTTP/1.1 200) 4 headers in 129 bytes (1 switches on core 0)
CRITICAL 2018-08-31 14:27:06,903 views Sleeping, 2 more minutes...
CRITICAL 2018-08-31 14:28:06,963 views Sleeping, 1 more minutes...
CRITICAL 2018-08-31 14:29:06,995 views Slept for 2 minutes...
[pid: 9|app: 0|req: 1/17] 10.160.43.172 () {28 vars in 324 bytes} [Fri Aug 31 14:27:06 2018] GET /api/test/2 => generated 15 bytes in 120225 msecs (HTTP/1.1 200) 4 headers in 129 bytes (1 switches on core 0)
If I use the same command to test the server running with manage.py runserver, it answers every time, no matter whether I sleep for 2 minutes or 10. So it is not the client's fault.
I've changed harakiri to 3600; no change.
EDIT2 (my Dockerfile):
FROM python:3.7.0-alpine
ADD . /app
RUN set -ex \
&& apk add mysql-dev \
pcre-dev \
&& apk add --no-cache --virtual .build-deps \
gcc \
make \
libc-dev \
musl-dev \
linux-headers \
libffi-dev \
&& pip install --no-cache-dir -r /app/requirements.txt \
&& runDeps="$( \
scanelf --needed --nobanner --recursive /venv \
| awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
| sort -u \
| xargs -r apk info --installed \
| sort -u \
)" \
&& apk add --virtual .python-rundeps $runDeps \
&& apk del .build-deps
WORKDIR /app
RUN mkdir -p static
RUN python manage.py collectstatic --clear --noinput
EXPOSE 8000
CMD ["/usr/local/bin/uwsgi", "--ini", "/app/uwsgi.ini"]
It actually was a Dockerfile issue. Previously I had uWSGI in my requirements.txt, so it was installed by pip install.
When I removed it from there and added uwsgi-python3 to apk add, everything worked fine.
No idea why it matters (everything else was working fine), but it solved my issue.
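In Dockerfile terms the change amounts to something like this (a sketch: the --plugin flag is an assumption on my part, since Alpine packages uWSGI's Python support as a separate plugin that has to be loaded explicitly, either on the command line or via plugins = python3 in uwsgi.ini):
# uWSGI dropped from requirements.txt; installed from Alpine packages instead
RUN apk add --no-cache uwsgi uwsgi-python3
CMD ["uwsgi", "--plugin", "python3", "--ini", "/app/uwsgi.ini"]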

Upstart Gunicorn doesn't working

Hi, I am a South Korean student :)
I am studying staging and production testing using nginx and gunicorn.
First, I want to run gunicorn using a socket:
gunicorn --bind unix:/tmp/tddtest.com.socket testlists.wsgi:application
and it shows:
[2016-06-26 05:33:42 +0000] [27861] [INFO] Starting gunicorn 19.6.0
[2016-06-26 05:33:42 +0000] [27861] [INFO] Listening at: unix:/tmp/tddgoat1.amull.net.socket (27861)
[2016-06-26 05:33:42 +0000] [27861] [INFO] Using worker: sync
[2016-06-26 05:33:42 +0000] [27893] [INFO] Booting worker with pid: 27893
and I run the functional tests in my local repository:
python manage.py test func_test
and it was working!
Creating test database for alias 'default'...
..
----------------------------------------------------------------------
Ran 2 tests in 9.062s
OK
Destroying test database for alias 'default'...
and I want gunicorn to start automatically when the server boots,
so I decided to use Upstart (on Ubuntu).
In /etc/init/tddtest.com.conf:
description "Gunicorn server for tddtest.com"
start on net-device-up
stop on shutdown
respawn
setuid elspeth
chdir /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
exec gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
(path of wsgi.py is)
/sites/tddtest.com/source/TDD_Test/testlists/testlists
and I run:
sudo start tddtest.com
It shows
tddtest.com start/running, process 27905
I think it is working,
but when I run the functional tests in my local repository:
python manage.py test func_test
it shows:
======================================================================
FAIL: test_can_start_a_list_and_retrieve_it_later (functional_tests.tests.NewVisitorTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/hanminsoo/Documents/TDD_test/TDD_Test/superlists/functional_tests/tests.py", line 38, in test_can_start_a_list_and_retrieve_it_later
self.assertIn('To-Do', self.browser.title)
AssertionError: 'To-Do' not found in 'Error'
----------------------------------------------------------------------
Ran 2 tests in 4.738s
THE GUNICORN IS NOT WORKING ㅠ_ㅠ
I want to look at the processes:
ps aux
but I can't find a gunicorn process:
[...]
ubuntu 24387 0.0 0.1 105636 1700 ? S 02:51 0:00 sshd: ubuntu@pts/0
ubuntu 24391 0.0 0.3 21284 3748 pts/0 Ss 02:51 0:00 -bash
root 24411 0.0 0.1 63244 1800 pts/0 S 02:51 0:00 su - elspeth
elspeth 24412 0.0 0.4 21600 4208 pts/0 S 02:51 0:00 -su
root 26860 0.0 0.0 31088 960 ? Ss 04:45 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody 26863 0.0 0.1 31524 1872 ? S 04:45 0:00 nginx: worker process
elspeth 28005 0.0 0.1 17160 1292 pts/0 R+ 05:55 0:00 ps aux
I can't find the problem...
Please, somebody help me. Thank you :)
Please modify your upstart script as follows:
exec /home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
If that does not work, it could very well be because the /home/elspeth/.pyenv/ folder is inaccessible; please check its permissions. If the permissions are correct and you are still having problems, try this:
script
cd /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
/home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
end script
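Either way, the actual startup error usually ends up in the Upstart job log (on Ubuntu, /var/log/upstart/<job>.log), so it is worth checking it and then confirming that the process exists:
sudo tail -n 50 /var/log/upstart/tddtest.com.log
ps aux | grep [g]unicorn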

supervisor, gunicorn and django only work when logged in

I'm using this guide to set up an intranet server. Everything goes OK, the server works, and I can check that it is working on my network.
But when I log out, I get a 404 error.
The sock file is in the path indicated in gunicorn_start.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ls -l run/
total 0
srwxrwxrwx 1 javier javier 0 mar 10 17:31 cmi.sock
Actually I can see the workers when I list the processes:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ps aux | grep cmi
javier 17354 0.0 0.2 14652 8124 ? S 17:27 0:00 gunicorn: master [cmi]
javier 17365 0.0 0.3 18112 10236 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17366 0.0 0.3 18120 10240 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17367 0.0 0.5 36592 17496 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17787 0.0 0.0 4408 828 pts/0 S+ 17:55 0:00 grep --color=auto cmi
And supervisorctl responds that the process is running:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ sudo supervisorctl status cmi
[sudo] password for javier:
cmi RUNNING pid 17354, uptime 0:29:21
There is an error in the nginx logs:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ tail logs/nginx-error.log
2014/03/10 17:38:57 [error] 17299#0: *19 connect() to
unix:/home/javier/workspace/cmi/cmi/run/cmi.sock failed (111: Connection refused) while
connecting to upstream, client: 10.69.0.174, server: , request: "GET / HTTP/1.1",
upstream: "http://unix:/home/javier/workspace/cmi/cmi/run/cmi.sock:/", host:
"10.69.0.68:2014"
Again, the error appears only when I log out or close the session, but everything works fine while I run or reload supervisor and stay logged in.
By the way, nginx, supervisor and gunicorn all run under my uid.
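(As a side check, the socket can be tested directly, bypassing nginx; curl supports unix sockets since 7.40:)
curl --unix-socket /home/javier/workspace/cmi/cmi/run/cmi.sock http://localhost/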
Thanks in advance.
Edit: Supervisor conf
[program:cmi]
command = /home/javier/entornos/cmi2014/bin/cmi_start
user = javier
stdout_logfile = /home/javier/workspace/cmi/cmi/logs/cmi_supervisor.log
redirect_stderr = true
autostart=true
autorestart=true
Gunicorn start script
#!/bin/bash
NAME="cmi" # Name of the application
DJANGODIR=/home/javier/workspace/cmi/cmi # Django project directory
SOCKFILE=/home/javier/workspace/cmi/cmi/run/cmi.sock # we will communicate using this unix socket
USER=javier # the user to run as
GROUP=javier # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=cmi.settings # which settings file should Django use
DJANGO_WSGI_MODULE=cmi.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/javier/entornos/cmi2014/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
export CMI_SECRET_KEY='***'
export CMI_DATABASE_HOST='***'
export CMI_DATABASE_NAME='***'
export CMI_DATABASE_USER='***'
export CMI_DATABASE_PASS='***'
export CMI_DATABASE_PORT='3306'
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/javier/entornos/cmi2014/bin/gunicorn ${DJANGO_WSGI_MODULE}:application --name $NAME --workers $NUM_WORKERS --user=$USER --group=$GROUP --log-level=debug --bind=unix:$SOCKFILE