For my current Flask deployment, I had to set up a uWSGI server.
This is how I created the uWSGI daemon:
sudo vim /etc/init/uwsgi.conf
# file: /etc/init/uwsgi.conf
description "uWSGI server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /myproject/myproject-env/bin/uwsgi --uid www-data --gid www-data --home /myproject/myproject-env/site/F11/Engineering/ --socket /tmp/uwsgi.sock --chmod-socket --module F11 --callable app --pythonpath /myproject/myproject-env/site/F11/Engineering/ -H /myproject/myproject-env
However after running this successfully: sudo start uwsgi
uwsgi start/running, process 1286
And trying to access the application via browser:
I get a 502 Bad Gateway
and an error entry in nginx error.log:
2013/06/13 23:47:28 [error] 743#0: *296 upstream prematurely closed connection while reading response header from upstream, client: xx.161.xx.228, server: myproject.com, request: "GET /show_records/2013/6 HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "myproject.com"
But the sock file has the permissions it needs:
srw-rw-rw- 1 www-data www-data 0 Jun 13 23:46 /tmp/uwsgi.sock
If I run the exec command from above on the command line as a process, it works perfectly fine. So why is the daemon not working correctly?
By the way, nginx is running as
vim /etc/nginx/nginx.conf
user www-data;
and vim /etc/nginx/sites-available/default
location / {
uwsgi_pass unix:///tmp/uwsgi.sock;
include uwsgi_params;
}
and it is started as sudo service nginx start
I am running this on Ubuntu 12.04 LTS.
I hope I have provided all the necessary data; hopefully someone can point me in the right direction. Thanks.
I finally solved this problem after working on it for nearly two days. I hope this solution helps other Flask/uWSGI users who hit a similar problem.
I had two major issues that caused this.
1) The best way to find problems with a daemon is, obviously, a log file and a cleaner structure.
sudo vim /etc/init/uwsgi.conf
Change the daemon script to the following:
# file: /etc/init/uwsgi.conf
description "uWSGI server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /home/ubuntu/uwsgi-1.9.12/uwsgi -c /myproject/uwsgi.ini
vim /myproject/uwsgi.ini
[uwsgi]
socket = /tmp/uwsgi.sock
master = true
enable-threads = true
processes = 5
chdir = /myproject/F11/Engineering
module = F11:app
virtualenv = /myproject/myproject-env/
uid = www-data
gid = www-data
logto = /myproject/error.log
This is a much cleaner way of setting up the daemon. Also notice the last line, which sets up the log file. Initially I had set the log file to /var/log/uwsgi/error.log. After a lot of sweat and tears I realized the daemon runs as www-data and hence cannot access /var/log/uwsgi/error.log, since that error.log was owned by root:root. This made uWSGI fail silently.
I found it much more practical to point the log file to my own /myproject, where the daemon has guaranteed access as www-data. Also, don't forget to make the whole project accessible to www-data, or the daemon will fail with an Internal Server Error message. -->
sudo chown www-data:www-data -R /myproject/
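A quick sanity check (a sketch, using the paths above) is to try writing the log file as the daemon user before starting the daemon:
sudo -u www-data touch /myproject/error.log # fails loudly if www-data cannot write here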
Restart uwsgi daemon:
sudo service uwsgi restart
2) Now you have three log files to look out for:
tail -f /var/log/upstart/uwsgi.log --> Shows problems with your daemon upon start
tail -f /var/log/nginx/error.log --> Shows permission problems when uWSGI access is refused, often because /tmp/uwsgi.sock is owned by root instead of www-data. In that case simply delete the sock file: sudo rm /tmp/uwsgi.sock
tail -f /myproject/error.log --> Shows errors thrown by uwsgi in your application
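Since tail can follow several files at once, you can watch all three together while reproducing the error:
tail -f /var/log/upstart/uwsgi.log /var/log/nginx/error.log /myproject/error.log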
This combination of log files helped me figure out that I also had a bad import with Flask-Babel in my Flask application. Bad in the sense that the way I used the library fell back to the system's locale to determine the datetime format.
File "/myproject/F11/Engineering/f11_app/templates/show_records.html", line 25, in block "body"
<td>{{ record.record_date|format_date }}</td>
File "./f11_app/filters.py", line 7, in format_date
day = babel_dates.format_date(value, "EE")
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 459, in format_date
return pattern.apply(date, locale)
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 702, in apply
return self % DateTimeFormat(datetime, locale)
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 699, in __mod__
return self.format % other
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 734, in __getitem__
return self.format_weekday(char, num)
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 821, in format_weekday
return get_day_names(width, context, self.locale)[weekday]
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 69, in get_day_names
return Locale.parse(locale).days[context][width]
AttributeError: 'NoneType' object has no attribute 'days'
This is the way I was using the Flask filter:
import babel.dates as babel_dates
@app.template_filter('format_date')
def format_date(value):
day = babel_dates.format_date(value, "EE")
return '{0} {1}'.format(day.upper(), affix(value.day))
The strangest part is that this code works perfectly fine within the dev environment (!). It even works fine when running uwsgi as a root process from the command line. But it fails when run by the www-data daemon. This must have something to do with how the locale is set, which Flask-Babel tries to fall back on.
When I changed the import like this, it all worked finally with the daemon:
from flask.ext.babel import format_date
@app.template_filter('format_date1')
def format_date1(value):
day = format_date(value, "EE")
return '{0} {1}'.format(day.upper(), affix(value.day))
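Alternatively (a sketch of my own, not part of the original fix), plain babel also works if you pass an explicit locale instead of relying on the process environment:
import datetime
import babel.dates as babel_dates
# an explicit locale sidesteps the fallback to the system locale,
# which is unset when running under the www-data daemon
day = babel_dates.format_date(datetime.date(2013, 6, 13), "EE", locale="en")
print(day.upper())  # THU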
Hence be careful when Eclipse/Aptana Studio tries to pick the right namespace for the classes in your code. It can really turn ugly.
It has now been working perfectly fine as a uWSGI daemon on an Amazon EC2 instance (Ubuntu 12.04) for two days. I hope this experience helps fellow Python developers.
I gave up; there was no uwsgi.log generated, and nginx just kept complaining about:
2014/03/06 01:06:28 [error] 23175#0: *22 upstream prematurely closed connection while reading response header from upstream, client: client.IP, server: my.server.IP, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/var/web/the_gelatospot/uwsgi.sock:", host: "host.ip"
for every single request. This only happened when running uwsgi as a service; as a process it started fine under any user. So this would work from the command line (the page responds):
$ exec /var/web/the_gelatospot/mez_server.sh
This didn't (/etc/init/site_service.conf):
description "mez sites virtualenv and uwsgi_django" start on runlevel
[2345] stop on runlevel [06] respawn respawn limit 10 5 exec
/var/web/the_gelatospot/mez_server.sh
The process would start, but on each request nginx would complain about the closed connection. Strangely, I have this same config working just fine for two other apps (both Mezzanine CMS apps) using the same nginx version and the same uwsgi version. I tried everything I could think of and everything that was suggested. In the end I switched to gunicorn, which works fine:
#!/bin/bash
NAME="the_gelatospot" # Name of the application
DJANGODIR=/var/web/the_gelatospot # Django project directory
SOCKFILE=/var/web/the_gelatospot/gunicorn.sock # we will communicate using this unix socket
USER=ec2-user
GROUP=ec2-user # the user to run as, the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=settings
#DJANGO_SETTINGS_MODULE=the_gelatospot.settings # which settings file should Django use
#DJANGO_WSGI_MODULE=the_gelatospot.wsgi # WSGI module name
DJANGO_WSGI_MODULE=wsgi
echo "Starting $NAME as `the_gelatospot`"
# Activate the virtual environment
cd $DJANGODIR
source ../mez_env/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
cd ..
# Start your Django GUnicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec gunicorn -k eventlet ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--log-level=debug \
--bind=unix:$SOCKFILE
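For completeness, the matching nginx side would be along these lines (a sketch assuming the socket path above; proxy_params ships with the Debian/Ubuntu nginx package):
location / {
    include proxy_params;
    proxy_pass http://unix:/var/web/the_gelatospot/gunicorn.sock;
}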
And here is the one that wouldn't work as a service (nginx complaining of prematurely closed connection and no app log data coming through).
#!/bin/bash
DJANGODIR=/var/web/the_gelatospot/ # Django project directory
cd $DJANGODIR
source ../mez_env/bin/activate
uwsgi --ini uwsgi.ini
And the uwsgi.ini:
[uwsgi]
uid = 222
gid = 500
socket = /var/web/the_gelatospot/uwsgi.sock
virtualenv = /var/web/mez_env
chdir = /var/web/the_gelatospot/
wsgi-file = /var/web/the_gelatospot/wsgi.py
pythonpath = ..
env = DJANGO_SETTINGS_MODULE=the_gelatospot.settings
die-on-term = true
master = true
chmod-socket = 666
;experiment using uwsgitop
worker = 1
;gevent = 100
processes = 1
daemonize = /var/log/nginx/uwsgi.log
logto = /var/log/nginx/uwsgi.log
log-maxsize = 10000000
enable-threads = true
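Note that daemonize and logto are both set in this ini. When debugging, one option is to comment out daemonize and run uWSGI in the foreground, so errors land on the terminal instead of in a file:
uwsgi --ini uwsgi.ini # with daemonize commented out, errors print to the console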
I went from gunicorn to uWSGI last year and had no issues with it till now; it also seemed a bit faster than gunicorn. Right now I'm thinking of sticking with gunicorn. It's getting better, puts up much nicer numbers with eventlet installed, and is easier to configure.
Hope this workaround helps. I would still like to know the issue with uWSGI and nginx but I'm stumped.
Update:
So using gunicorn allowed me to run the server as a service, and while toying with Mezzanine I ran into this error: FileSystemEncodingChanged
To fix this I found the solution here:
https://groups.google.com/forum/#!msg/mezzanine-users/bdln_Y99zQw/9HrhNSKFyZsJ
And I had to modify it a bit since I don't use supervisord, only upstart and a shell script. I added this right before executing gunicorn in my mez_server.sh file:
export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_LANG=en_US.UTF-8
exec gunicorn -k eventlet ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--log-level=debug \
--bind=unix:$SOCKFILE
I also tried this fix using uWSGI like so (a lot shorter since it uses uwsgi.ini):
export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_LANG=en_US.UTF-8
exec uwsgi --ini uwsgi.ini
I am still sticking with gunicorn, since it kept working despite the problem and led me in the right direction to resolve it. I was very disappointed that uWSGI provided no output in the log file even with these params; I only saw the server start process and that's it:
daemonize = /var/log/nginx/uwsgi.log
logto = /var/log/nginx/uwsgi.log
While nginx kept throwing the disconnect error, uWSGI sat there like nothing was happening.
To run Gunicorn as a daemon with a single command:
gunicorn app.wsgi:application -b 127.0.0.1:8000 --daemon
This binds your app to 127.0.0.1:8000, and --daemon forces it to run as a daemon,
but defining a gunicorn config file and running it with the -c flag is better practice.
for more:
https://gunicorn-docs.readthedocs.org/en/develop/configure.html
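As a minimal sketch of such a config file (Gunicorn config files are Python; the values here are illustrative):
# gunicorn_config.py
bind = "127.0.0.1:8000"
workers = 3
daemon = True
errorlog = "/var/log/gunicorn/error.log"
and then start it with:
gunicorn -c gunicorn_config.py app.wsgi:application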
I have a Django app which runs on Gunicorn, and is managed by SupervisorD which is managed by Ansible.
I want Django to read the DJANGO_SECRET_KEY variable from the environment, since I don't want to store my secret key in a config file or VCS. For that I read the key from the environment in my settings.py:
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
Looking at Supervisor docs it says:
Note that the subprocess will inherit the environment variables of the shell used to start “supervisord” except for the ones overridden here. See Subprocess Environment.
Here's my supervisor.conf:
[program:gunicorn]
command=/.../.virtualenvs/homepage/bin/gunicorn homepage.wsgi -w 1 --bind localhost:8001 --pid /tmp/gunicorn.pid
directory=/.../http/homepage
When I set the variable and run Gunicorn command from the shell, it starts up just fine:
$ DJANGO_SECRET_KEY=XXX /.../.virtualenvs/homepage/bin/gunicorn homepage.wsgi -w 1 --bind localhost:8001 --pid /tmp/gunicorn.pid
However when I set the variable in the shell and restart the Supervisor service my app fails to start with error about not found variable:
$ DJANGO_SECRET_KEY=XXX supervisorctl restart gunicorn
gunicorn: ERROR (not running)
gunicorn: ERROR (spawn error)
Looking at Supervisor error log:
File "/.../http/homepage/homepage/settings.py", line 21, in <module>
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
File "/.../.virtualenvs/homepage/lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'DJANGO_SECRET_KEY'
[2017-08-27 08:22:09 +0000] [19353] [INFO] Worker exiting (pid: 19353)
[2017-08-27 08:22:09 +0000] [19349] [INFO] Shutting down: Master
[2017-08-27 08:22:09 +0000] [19349] [INFO] Reason: Worker failed to boot.
I have also tried restarting the supervisor service, but the same error occurs:
$ DJANGO_SECRET_KEY=XXX systemctl restart supervisor
...
INFO exited: gunicorn (exit status 3; not expected)
My question is: how do I make Supervisor pass environment variables to its child processes?
Create an executable file similar to this and try to start it manually.
I.e. create the file below, e.g. /home/user/start_django.sh, and copy the script into it.
You need to fill in DJANGODIR and make other adjustments for your case; you may also need to adjust permissions accordingly.
#!/bin/bash
DJANGODIR=/.../.../..
ENVBIN=/.../.virtualenvs/homepage/bin
# Activate the virtual environment
cd $DJANGODIR
source $ENVBIN/activate
export DJANGO_SECRET_KEY=XXX
#define other env variables if you need
# Start your Django
exec gunicorn homepage.wsgi -w 1 --bind localhost:8001 --pid /tmp/gunicorn.pid
If it starts manually then just use this file in your conf.
[program:django_project]
command = /home/user/start_django.sh
user = {your user}
stdout_logfile = /var/log/django.log
redirect_stderr = true
# you can also try to define environment variables in this conf
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8,DJANGO_SECRET_KEY=XXX
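After editing the conf, make supervisor pick up the change (standard supervisorctl workflow):
sudo supervisorctl reread # re-read changed config files
sudo supervisorctl update # apply them, restarting affected programs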
OK, figured it out myself. It turns out Ansible has a feature called Vault, which is made for exactly this kind of job: encrypting keys.
Now I added the vaulted secret key to Ansible's host_vars, see:
Vault: single encrypted variable and Inventory: Splitting out host and group specific data.
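For reference, the vaulted variable itself can be generated with ansible-vault (illustrative value; encrypt_string requires Ansible 2.1+):
ansible-vault encrypt_string 'xxx-secret-xxx' --name 'django_secret_key'
# paste the output into host_vars for the target host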
I added a task to my Ansible playbook to copy the key from the vault to the server:
- name: copy django secret key to server
copy: content="{{ django_secret_key }}" dest=/.../http/homepage/deploy/django_secret_key.txt mode=0600
And made Django read the secret from that file:
with open(os.path.join(BASE_DIR, 'deploy', 'django_secret_key.txt')) as secret_key_file:
SECRET_KEY = secret_key_file.read().strip()
If anyone has a simpler/better solution, please post it and I will accept it as the answer.
I am running a Django project and I want Gunicorn and Nginx to serve this project.
I am using Gunicorn 19.5.
The problem is basically that my socket file doesn't get created and I tried everything (chmod 777, chown, ...) and nothing is working.
Here is my bash script to start gunicorn:
#!/bin/bash
NAME="hello_project" # Name of the application
DJANGODIR=/home/ubuntu/git/hello_project # Django project directory
SOCKFILE=/home/ubuntu/git/hello_project/run/gunicorn.sock # we will communicate using this unix socket
LOG_FILE=/home/ubuntu/git/hello_project/logs/gunicorn.log
USER=ubuntu # the user to run as
GROUP=ubuntu # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
MAX_REQUESTS=1 # reload the application server for each request
DJANGO_SETTINGS_MODULE=hello_project.settings # which settings file should Django use
DJANGO_WSGI_MODULE=hello_project.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ~/.virtualenvs/hello_project/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn’t exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ~/.virtualenvs/hello_project/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--max-requests $MAX_REQUESTS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=$LOG_FILE
I am going crazy here: there is no log file I can look at, and I even tried to create the socket file myself, with no success. I suspect it's a permission problem, but even when I run the script with sudo it doesn't work.
My directory "run" gets created though.
Here is the nginx error from the error.log file:
2016/05/19 16:56:32 [crit] 11053#0: *28 connect() to unix:/home/ubuntu/git/hello_project/run/gunicorn.sock failed (2: No such file or directory) while connecting to upstream...
Any clue?
Thanks a lot.
Try running the script without exec ~/.virtualenvs/hello_project/bin/gunicorn.
Instead, try running it as gunicorn followed by the args.
This assumes you have permission to install gunicorn on the system without a virtualenv.
I have a gist that starts the same process you're trying to run. (Django, Nginx, Gunicorn)
https://gist.github.com/marcusshepp/129c822e2065e20122d8
I see you're using supervisor, so you'll want to avoid the --daemon option that I use in my script.
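If the socket still never appears, one way to narrow the problem down is to take the socket out of the equation and run Gunicorn in the foreground on a TCP port first (a sketch, using the module from the script above):
~/.virtualenvs/hello_project/bin/gunicorn hello_project.wsgi:application --bind 127.0.0.1:8000 --log-level debug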
I'm having a problem which is driving me nuts. I'm trying to create an Ubuntu 15.04 Upstart job, which would start uWSGI server (uWSGI version 2.0.7-debian) running a Django application.
I've configured the service as follows and try to start the job by issuing the command $ sudo service uwsgi start. There is no output in the log files, no socket file created and no error.
# file: /etc/init/uwsgi.conf
description "uWSGI server"
start on runlevel [2345]
stop on runlevel [!2345]
exec /usr/bin/uwsgi --ini /srv/configs/uwsgi.ini
service uwsgi status says
● uwsgi.service - LSB: Start/stop uWSGI server instance(s)
Loaded: loaded (/etc/init.d/uwsgi)
Active: active (exited) since Tue 2015-05-05 09:40:08 EEST; 1s ago
Docs: man:systemd-sysv-generator(8)
Process: 21405 ExecStop=/etc/init.d/uwsgi stop (code=exited, status=0/SUCCESS)
Process: 21460 ExecStart=/etc/init.d/uwsgi start (code=exited, status=0/SUCCESS)
May 05 09:40:08 web-2 systemd[1]: Starting LSB: Start/stop uWSGI server instance(s)...
May 05 09:40:08 web-2 uwsgi[21460]: * Starting app server(s) uwsgi
May 05 09:40:08 web-2 uwsgi[21460]: ...done.
May 05 09:40:08 web-2 systemd[1]: Started LSB: Start/stop uWSGI server instance(s).
The uWSGI app is configured as
[uwsgi]
uid = www-data
gid = www-data
socket = /srv/sockets/uwsgi.sock
chmod-socket = 666
master = true
processes = 4
enable-threads = true
plugins = python
chdir = /srv/sites/current
module = wsgi:application
virtualenv = /srv/environments/current
logto = /srv/logs/uwsgi.log
logfile-chown = true
touch-reload = /srv/sites/current/wsgi.py
Everything works fine if uWSGI is started directly, e.g. by issuing the command:
sudo -u www-data /usr/bin/uwsgi --ini /srv/configs/uwsgi.ini
For the socket and log file directories I've set the most relaxed permissions.
So, I'm at a loss. What to do next?
Thanks to nmgeek's comment, I started looking at other places where uWSGI is configured.
While trying to move everything related to our own app under /srv, I had deleted the /run/uwsgi directory, where uWSGI tries to create a PID file.
You can find the reference at /usr/share/uwsgi/conf/default.ini.
Too bad uWSGI doesn't produce any error in this situation, which left me frustrated for a day.
I just want to add some details. In my case I didn't delete anything, but for some reason systemd was not able to create the /run/uwsgi directory. So I did echo "d /run/uwsgi 0755 www-data www-data -" > /etc/tmpfiles.d/uwsgi.conf. After that I restarted my VM (I didn't find a way to apply these changes without restarting) and uwsgi started to work!
Yes, it's too bad that uWSGI doesn't produce any error in this situation.
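For what it's worth, a tmpfiles.d entry can usually be applied without a reboot (a sketch, assuming the file created above):
sudo systemd-tmpfiles --create /etc/tmpfiles.d/uwsgi.conf # should create /run/uwsgi
sudo service uwsgi restart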
I have a Django website that uses an environment variable, DJANGO_MODE, to decide which settings to use: development or staging. The environment variable is in the bashrc, and when running the app with the development server everything works fine.
But when I run the app using uWSGI, it doesn't seem to notice the environment variable and uses the default (development) settings instead of production.
I run uWSGI in Emperor mode, and other than ignoring the environment variable, everything seems to be running fine. And yes, the user running uWSGI is the same one for which the bashrc has DJANGO_MODE set.
The command used to run uWSGI is:
exec uwsgi --emperor /etc/uwsgi/vassals --uid web_user --gid web_user
And the ini file for the vassal -
[uwsgi]
processes = 2
socket = /tmp/uwsgi.sock
wsgi-file = /home/web_user/web/project_dir/project/wsgi.py
chdir = /home/web_user/web/project_dir
virtualenv = /home/web_user/.virtualenvs/production_env
logger = syslog
chmod-socket = 777
It cannot work, because bash config files are only read by bash. You have to set the var in the emperor or in the vassal (the second is the better approach). Just add
env=DJANGO_MODE=foobar
to your config (do not use whitespace).
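Applied to the vassal ini from the question, that looks like this (the value is illustrative):
[uwsgi]
processes = 2
socket = /tmp/uwsgi.sock
wsgi-file = /home/web_user/web/project_dir/project/wsgi.py
chdir = /home/web_user/web/project_dir
virtualenv = /home/web_user/.virtualenvs/production_env
logger = syslog
chmod-socket = 777
env=DJANGO_MODE=production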
I deploy my Django project with gunicorn, nginx and supervisord.
I installed gunicorn in the virtualenv and added it to INSTALLED_APPS.
The command ./manage.py run_gunicorn -b 127.0.0.1:8999 works:
2012-12-04 12:27:33 [21917] [INFO] Starting gunicorn 0.16.1
2012-12-04 12:27:33 [21917] [INFO] Listening at: http://127.0.0.1:8999 (21917)
2012-12-04 12:27:33 [21917] [INFO] Using worker: sync
2012-12-04 12:27:33 [22208] [INFO] Booting worker with pid: 22208
For nginx I edited nginx.conf:
server {
listen 111111111:80;
server_name my_site.pro;
access_log /home/user/logs/nginx_access.log;
error_log /home/user/logs/nginx-error.log;
location /static/ {
alias /home/user/my_project/static/;
}
location /media/ {
alias /home/user/my_project/media/;
}
location / {
proxy_pass http://127.0.0.1:8999;
include /etc/nginx/proxy.conf;
}
}
After that I restarted nginx.
supervisord.conf:
[unix_http_server]
file=/tmp/supervisor-my_project.sock
chmod=0700
chown=user:user
[supervisord]
logfile=/home/user/logs/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/tmp/supervisord-my_project.pid
nodaemon=false
minfds=1024
minprocs=200
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor-my_project.sock
[program:gunicorn]
command=/home/user/bin/manage run_gunicorn -w 4 -b 127.0.0.1:8999 -t 300 --max-requests=1000
startsecs=10
stopwaitsecs=60
redirect_stderr=true
stdout_logfile=/home/user/gunicorn.log
I ran bin/supervisorctl start all. But I got:
gunicorn: ERROR (no such file)
What file is missing? How can I deploy my project?
For future searchers: I had this problem, and the issue was that I needed to provide a full path to the Gunicorn binary. For whatever reason, even with the PATH= environment variable specified, supervisor couldn't find the binary. Once I used /full_path/gunicorn it worked. (Maybe there's a way to do this correctly with environment variables.)
Since you are using virtualenv, you must set the PATH used by it in the supervisord.conf environment.
Try this:
[program:gunicorn]
command=/home/user/bin/manage run_gunicorn -w 4 -b 127.0.0.1:8999 -t 300 --max-requests=1000
environment=PATH="/home/user/bin/"
...
This is assuming /home/user/bin/ is the path to your virtualenv.
I had the same issue; it turned out gunicorn was not installed in my virtual environment at all.
Do:
pip install gunicorn==<version_number>
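You can then confirm which environment gunicorn is installed in (path illustrative):
/home/user/bin/pip show gunicorn # should print gunicorn's version and install location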
I switched from installing supervisor using apt to using pip. This changed the path of supervisor and pidproxy from /usr/bin to /usr/local/bin/. I use pidproxy in my supervisor gunicorn.conf.
So just switching the supervisor install method meant pidproxy could not be found, which generated the message "gunicorn: ERROR (no such file)".
The poor error message could be referring to something other than gunicorn.
command=/usr/local/bin/pidproxy /var/run/gunicorn.pid /gunicorn.sh