couldn't exec gunicorn_start EACCES - django

#!/bin/bash
NAME="trueguild" # Name of the applica$
DJANGODIR=/webapps/trueguild_django/trueguild # Django project dire$
SOCKFILE=/webapps/trueguild_django/run/gunicorn.sock # we will communicte $
USER=trueguild # the user to run as
GROUP=webapps # the group to run as
NUM_WORKERS=3 # how many worker pro$
DJANGO_SETTINGS_MODULE=trueguild.settings # which settings file$
DJANGO_WSGI_MODULE=trueguild.wsgi # WSGI module name
echo "Starting $NAME as $NAME"
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
I have the above gunicorn_start file. But when I start it with supervisorctl, it throws this error:
supervisor: couldn't exec /webapps/trueguild_django/bin/gunicorn_start: EACCES
supervisor: child process was not spawned
I am using Django along with nginx and Gunicorn.
How can I start the server using supervisor?
Here is the supervisord conf file:
; supervisor config file
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0700 ; socket file mode (default 0700)
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor ; ('AUTO' child log dir, default $TEMP)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
; The [include] section can just contain the "files" setting. This
; setting can list multiple files (separated by whitespace or
; newlines). It can also contain wildcards. The filenames are
; interpreted as relative to this file. Included files *cannot*
; include files themselves.
[include]
files = /etc/supervisor/conf.d/*.conf
And here is the trueguild.conf file inside the conf.d directory:
[program:trueguild]
command = /webapps/trueguild_django/bin/gunicorn_start ; Command to start app
user = trueguild ; User to run as
stdout_logfile = /webapps/trueguild_django/logs/gunicorn_supervisor.log ; Where to write log messages
redirect_stderr = true ; Save stderr in the same log
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8 ; Set UTF-8 as default encoding

In the trueguild.conf file, add sh after command...
It'll be like:
command = sh /webapps/trueguild_django/bin/gunicorn_start
It'll work.

Please make sure the trueguild user has permission to execute the gunicorn_start script.
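Alternatively: the EACCES in the error is the kernel refusing to execute the script, which almost always means the file is missing its execute bit. Marking the script executable avoids the need for the sh prefix; a minimal sketch, assuming sudo access:
# give the start script the execute bit so supervisor can exec it directly
chmod u+x /webapps/trueguild_django/bin/gunicorn_start
# reload the config and restart the program
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart trueguild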

Related

AWS Elastic Beanstalk ebextensions configuration doesn't work when new instance gets spawned

In my Laravel application, I have created a folder .ebextensions and it has the configuration to install supervisor in the EC2 instance.
When I deploy the application for the first time and the instance gets created, everything works fine. Supervisor gets installed.
But when the instance scales and a new EC2 gets spawned, it doesn't pick up the same configuration; I need to install supervisor manually on the newer instance.
Is there a way for the newer instances to take the configuration from .ebextensions and run it the same way they did the first time?
This is the structure of the .ebextensions folder
.ebextensions
- supervisor
- setup.sh
- supervisor_laravel.conf
- supervisord.conf
- supervisor.config
setup.sh
#!/bin/bash
echo "Supervisor - starting setup"
. /opt/elasticbeanstalk/deployment/env
if [ ! -f /usr/bin/supervisord ]; then
    echo "installing supervisor"
    easy_install supervisor
else
    echo "supervisor already installed"
fi
if [ ! -d /etc/supervisor ]; then
    mkdir /etc/supervisor
    echo "create supervisor directory"
fi
if [ ! -d /etc/supervisor/conf.d ]; then
    mkdir /etc/supervisor/conf.d
    echo "create supervisor configs directory"
fi
. /opt/elasticbeanstalk/deployment/env && cat .ebextensions/supervisor/supervisord.conf > /etc/supervisor/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat .ebextensions/supervisor/supervisord.conf > /etc/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat .ebextensions/supervisor/supervisor_laravel.conf > /etc/supervisor/conf.d/supervisor_laravel.conf
if ps aux | grep "[/]usr/bin/supervisord"; then
    echo "supervisor is running"
else
    echo "starting supervisor"
    /usr/bin/supervisord
fi
/usr/bin/supervisorctl reread
/usr/bin/supervisorctl update
echo "Supervisor Running!"
yum -y install http://cbs.centos.org/kojifiles/packages/beanstalkd/1.9/3.el7/x86_64/beanstalkd-1.9-3.el7.x86_64.rpm
if ps aux | grep "[/]usr/bin/beanstalkd"; then
    echo "beanstalkd is running"
else
    echo "starting beanstalkd"
    /bin/systemctl start beanstalkd.service
fi
echo "Beanstalkd Running..."
supervisor_laravel.conf
[program:worker]
process_name=%(program_name)s_%(process_num)02d
command=/usr/bin/php /var/www/html/artisan queue:work --tries=3 --timeout=0
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
;user=forge
numprocs=3
redirect_stderr=true
stderr_logfile=/var/log/supervisor_laravel.err.log
stdout_logfile=/var/log/supervisor_laravel.out.log
supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
environment=SYMFONY_ENV=prod
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[include]
files = /etc/supervisor/conf.d/*.conf
[inet_http_server]
port = 9000
username = user
password = pw
supervisor.config
container_commands:
  01_install_supervisor:
    command: ".ebextensions/supervisor/setup.sh"
I have found the answer, though in a slightly different way.
Initially, I had all the files in the .ebextensions folder. I moved some of the files into a different folder called "awsconfig" (the name doesn't matter), moved some of the commands from the setup.sh file to .platform/hooks/postdeploy, and made setup.sh executable with chmod +x.
Important
AWS cleans up the .ebextensions folder after they are executed. I read that here
I did this because supervisor was getting installed on the newer instance (the instance created after scaling), but the configuration files weren't getting copied, since there was no .ebextensions folder on the newer instance.
To copy the configuration files after the instance gets deployed, I used platform hooks postdeploy.
You can read about platform-hooks-postdeploy here
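One detail that bites people here: Elastic Beanstalk only runs platform hooks that carry the execute bit. A small sketch for keeping that bit in the deploy bundle, assuming the source bundle is built from a git repository:
# set the execute bit locally and record it in git so every deploy keeps it
chmod +x .platform/hooks/postdeploy/setup.sh
git update-index --chmod=+x .platform/hooks/postdeploy/setup.sh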
This is the structure I have now
.ebextensions
- supervisor
- setup.sh
- supervisor.config
.platform
- hooks
- postdeploy
- setup.sh
awsconfig
- supervisor
- supervisor_laravel.conf
- supervisord.conf
These are the changes to the setup.sh file, which is now split into two files in different folders:
.ebextensions/supervisor/setup.sh
#!/bin/bash
echo "Supervisor - starting setup"
. /opt/elasticbeanstalk/deployment/env
if [ ! -f /usr/bin/supervisord ]; then
    echo "installing supervisor"
    easy_install supervisor
else
    echo "supervisor already installed"
fi
if [ ! -d /etc/supervisor ]; then
    mkdir /etc/supervisor
    echo "create supervisor directory"
fi
if [ ! -d /etc/supervisor/conf.d ]; then
    mkdir /etc/supervisor/conf.d
    echo "create supervisor configs directory"
fi
.platform/hooks/postdeploy/setup.sh
#!/bin/bash
. /opt/elasticbeanstalk/deployment/env && cat /var/www/html/awsconfig/supervisor/supervisord.conf > /etc/supervisor/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat /var/www/html/awsconfig/supervisor/supervisord.conf > /etc/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat /var/www/html/awsconfig/supervisor/supervisor_laravel.conf > /etc/supervisor/conf.d/supervisor_laravel.conf
if ps aux | grep "[/]usr/bin/supervisord"; then
    echo "supervisor is running"
else
    echo "starting supervisor"
    /usr/bin/supervisord
fi
/usr/bin/supervisorctl reread
/usr/bin/supervisorctl update
echo "Supervisor Running!"
yum -y install http://cbs.centos.org/kojifiles/packages/beanstalkd/1.9/3.el7/x86_64/beanstalkd-1.9-3.el7.x86_64.rpm
if ps aux | grep "[/]usr/bin/beanstalkd"; then
    echo "beanstalkd is running"
else
    echo "starting beanstalkd"
    /bin/systemctl start beanstalkd.service
fi
echo "Beanstalkd Running..."
The content of all other files remains the same as what I have posted in the question.
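To verify the hook actually ran on a freshly scaled instance, the Elastic Beanstalk hook log is the first place to look. A short sketch, assuming an Amazon Linux 2 platform (the log path is my assumption, not something from the original answer):
# on the new EC2 instance, after the scale-out deploy finishes
sudo tail -n 50 /var/log/eb-hooks.log   # platform hook output (AL2 path; an assumption)
sudo /usr/bin/supervisorctl status      # the worker program should show RUNNING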

Celery wont run when initiated by supervisor on aws elasticbeanstalk

Update!
Got it to work by adding export LC_ALL=en_US.UTF-8 at the top of my bash script.
I am attempting to daemonize celery beat and worker. I have no troubles running celery or beat when I ssh into my Elastic Beanstalk instance and do the following steps:
cd /opt/python/current/app
/opt/python/run/venv/bin/celery -A myDjangoApp beat --loglevel=INFO
/opt/python/run/venv/bin/celery -A myDjangoApp worker --loglevel=INFO
Tasks are scheduled and execute with ease. However, when I run the exact same commands with supervisor, I get an uninformative error. From looking at supervisorctl status I see the process runs for a few seconds and then fails. Upon further examination, the log files show me the following error:
File "/opt/python/run/venv/bin/celery", line 11, in <module>
sys.exit(main())
File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/__main__.py", line 15, in main
sys.exit(_main())
File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/celery.py", line 213, in main
return celery(auto_envvar_prefix="CELERY")
File "/opt/python/run/venv/local/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/opt/python/run/venv/local/lib/python3.6/site-packages/click/core.py", line 760, in main
_verify_python3_env()
File "/opt/python/run/venv/local/lib/python3.6/site-packages/click/_unicodefun.py", line 130, in _verify_python3_env
" mitigation steps.{}".format(extra)
RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Consult https://click.palletsprojects.com/python3/ for mitigation steps.
Listed below are my supervisor.conf and celery.sh (runs celery) files.
Following is my supervisor.conf file:
[unix_http_server]
file=/opt/python/run/supervisor.sock ; (the path to the socket file)
chmod=0777 ; socket file mode (default 0700)
;chown=nobody:nogroup ; socket file uid:gid owner
[supervisord]
logfile=/opt/python/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=10MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/opt/python/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
directory=/opt/python/current/app ; (default is not to cd during start)
;nocleanup=true ; (don't clean up tempfiles at start;default false)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///opt/python/run/supervisor.sock
[program:httpd]
command=/opt/python/bin/httpdlaunch
numprocs=1
directory=/opt/python/current/app
autostart=true
autorestart=unexpected
startsecs=1 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
killasgroup=false ; SIGKILL the UNIX process group (def false)
redirect_stderr=false
;[program:celeryWorker]
;user=root
;command=/opt/python/run/venv/bin/celery -A Daash worker --loglevel=INFO --daemon
;directory=/opt/python/current/app
;numprocs=1
;autostart=true
;autorestart=true
;startsecs=0
;stopwaitsecs=60
;killasgroup=true
[program:celeryBeat]
user=root
command=/opt/python/etc/celery.sh
directory=/opt/python/current/app
numprocs=1
autostart=true
autorestart=true
startsecs=0
stopwaitsecs=60
killasgroup=true
stdout_logfile = /opt/python/log/cel_stdout.log
stderr_logfile = /opt/python/log/cel.log
Following is my celery.sh file. I have mirrored in it the exact steps I run when starting celery manually:
#!/usr/bin/env bash
cd /opt/python/current/app
export PATH=/opt/python/run/venv/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/.local/bin:/home/ec2-user/bin
sudo /opt/python/run/venv/bin/celery -A myDjangoApp beat --loglevel=INFO
Judging by the stack trace you provided, the error happens inside the Click dependency and stems from an encoding limitation.
Click's documentation explains this error a bit further: https://click.palletsprojects.com/en/8.0.x/unicode-support/?highlight=ascii
The fact that you were able to run it interactively is because your user session has the locale and other environment variables correctly set, which the root session created by supervisord probably does not.
In any case, setting these variables within your celery.sh script should do the trick:
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
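For completeness, a minimal sketch of celery.sh with the locale fix applied. The exec and the dropped sudo are my changes, not part of the original script; exec lets supervisor signal the celery process directly instead of a wrapper shell:
#!/usr/bin/env bash
# force a UTF-8 locale so Click does not abort under supervisord's bare environment
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
cd /opt/python/current/app
export PATH=/opt/python/run/venv/bin:$PATH
# exec replaces the shell so supervisor tracks the celery process itself
exec /opt/python/run/venv/bin/celery -A myDjangoApp beat --loglevel=INFO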

Gunicorn error 203 while setting up django project

I am trying to set up a Django project with nginx and gunicorn, but I get a gunicorn error 203:
My installation is located at path /webapps/sarlex and all project files are owned by user "sarlex" and group "webapps".
The virtualenv is located at /webapps/sarlex/environment.
Gunicorn is located at /webapps/sarlex/environment/bin/ and owned by user sarlex.
Gunicorn configuration is set in /webapps/sarlex/gunicorn_start.sh:
NAME="sarlex" # Name of the application
DJANGODIR=/webapps/sarlex/ # Django project directory
SOCKFILE=/webapps/sarlex/run/gunicorn.sock # we will communicate using this unix socket
USER=sarlex # the user to run as
GROUP=webapps # the group to run as
NUM_WORKERS=9 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=sarlex.settings # which settings file should Django use
DJANGO_WSGI_MODULE=sarlex.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /webapps/sarlex/environment/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
Executing gunicorn_start.sh works, both with the environment activated and deactivated.
/etc/systemd/system/gunicorn.service is owned by root and has this content:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=sarlex
Group=webapps
WorkingDirectory=/webapps/sarlex
ExecStart=/webapps/sarlex/gunicorn_start.sh
[Install]
WantedBy=multi-user.target
I tried to replace "User=sarlex" with "User=root" but still get the same error.
The correct gunicorn binary is picked up.
Thanks for your help
What exactly is your question?
The export for DJANGO_SETTINGS_MODULE was awkward.
Enclose expressions and variable references in double quotes:
NAME="sarlex" # Name of the application
DJANGODIR=/webapps/sarlex/ # Django project directory
SOCKFILE=/webapps/sarlex/run/gunicorn.sock # we will communicate using this unix socket
USER=sarlex # the user to run as
GROUP=webapps # the group to run as
NUM_WORKERS=9 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=sarlex.settings # which settings file should Django use
DJANGO_WSGI_MODULE=sarlex.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
source /webapps/sarlex/environment/bin/activate
cd "$DJANGODIR"
export DJANGO_SETTINGS_MODULE
export PYTHONPATH="$DJANGODIR:$PYTHONPATH"
# Create the run directory if it doesn't exist
RUNDIR="$(dirname $SOCKFILE)"
test -d "$RUNDIR" || mkdir -p "$RUNDIR"
# Start your Django Unicorn
exec gunicorn "${DJANGO_WSGI_MODULE}:application" \
--name "$NAME" \
--workers "$NUM_WORKERS" \
--user="$USER" --group="$GROUP" \
--bind="unix:$SOCKFILE" \
--log-level=debug \
--log-file=-
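One more thing worth checking alongside the quoting: systemd's status=203/EXEC means the ExecStart file could not be executed at all, which usually comes down to a missing execute bit or a missing shebang (note the script quoted above starts at NAME=..., with no #!/bin/bash line). A quick sketch of the usual checks:
# 203/EXEC checks: the script needs a shebang and the execute bit
head -n 1 /webapps/sarlex/gunicorn_start.sh   # should print: #!/bin/bash
chmod +x /webapps/sarlex/gunicorn_start.sh
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
journalctl -u gunicorn -n 50                  # inspect the latest service log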
Please show /var/log/nginx/ error messages.
Django failed to start because in one of my scripts I wrote "admin" in a path instead of "Admin".
This didn't cause a problem on my Windows PC, where I develop my code, because its filesystem is case-insensitive.

How to config supervisor with Django channels and server daphne

I have a problem with my supervisor configuration; my file is in /etc/supervisor/conf.d/realtimecolonybit.conf.
When I try the command supervisorctl reread, it shows me "No config updates to processes", and when I try another command like this
supervisorctl status realtimecolonybit
it shows me this error:
realtimecolonybit FATAL can't find command '/home/ubuntu/realtimecolonybit/bin/start.sh;'
And when I try supervisorctl start realtimecolonybit
it shows me this error:
realtimecolonybit: ERROR (no such file)
My realtimecolonybit.conf file is below:
[program:realtimecolonybit]
command = /home/ubuntu/realtimecolonybit/bin/start.sh;
user = root
stdout_logfile = /home/ubuntu/realtimecolonybit/logs/realtimecolonybit.log;
redirect_strderr = true;
My start.sh file is below:
#!/bin/bash
NAME="realtimecolonybit"
DJANGODIR=/home/ubuntu/realtimecolonybit/colonybit
SOCKFILE=/home/ubuntu/realtimecolonybit/run/gunicorn.sock
USER=root
GROUP=root
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=colonybit.settings
echo "Starting $NAME as `whoami`"
cd $DJANGODIR
source /home/ubuntu/realtimecolonybit/bin/activate
# workon realtimecolonybit
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec daphne -b 0.0.0.0 -p 8001 colonybit.asgi:application
When I run it without supervisor, like this
(realtimecolonybit)realtimecolonybit/#/ ./bin/start.sh
it runs OK and works well, but the server sometimes goes down.
I am trying to run Django 1.11 and django-channels with supervisor; my app is on AWS.
I solved my problem. The error was in the .conf file: I removed the ;, removed the .sh extension, and renamed start.sh to start.
wrong command
command = /home/ubuntu/realtimecolonybit/bin/start.sh;
correct command
command = /home/ubuntu/realtimecolonybit/bin/start
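Why the ; mattered: supervisor's INI parser only treats ; as a comment at the start of a line or after whitespace, so a semicolon glued to a value becomes part of the value in practice, which is exactly why the error quoted the command as '.../start.sh;'. A sketch of the fully cleaned-up file (trailing semicolons dropped and the redirect_strderr typo fixed; everything else is from the question):
[program:realtimecolonybit]
; trailing semicolons removed so the values parse cleanly
command = /home/ubuntu/realtimecolonybit/bin/start
user = root
stdout_logfile = /home/ubuntu/realtimecolonybit/logs/realtimecolonybit.log
redirect_stderr = true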

Gunicorn settings for virtualenvwrapper

I am trying to deploy a test site I made using Django and virtualenvwrapper. I want to use nginx for requests. I used Taskbuster's tutorial, so my project layout is similar to the one below:
--abctasarim **main folder
--manage.py **django manage file
----/yogavidya ** project folder
----/yogavidya/wsgi.py **wsgi file
----/yogavidya/settings/base.py ***settings
I prepared a script to use with gunicorn, pointing the virtualenv path at the virtualenvwrapper envs:
#!/bin/bash
NAME="yogavidya" #Name of the application (*)
DJANGODIR=/home/ytsejam/public_html/abctasarim # Django project directory (*)
SOCKFILE=/home/ytsejam/public_html/abctasarim/run/gunicorn.sock # we will communicate using this unix socket (*)
USER=ytsejam # the user to run as (*)
GROUP=webdata # the group to run as (*)
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn (*)
DJANGO_SETTINGS_MODULE=yogavidya.settings.base # which settings file should Django use (*)
DJANGO_WSGI_MODULE=yogavidya.wsgi # WSGI module name (*)
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/ytsejam/.virtualenvs/yv_dev/bin/activate
#export /home/ytsejam/.virtualenvs/yv_dev/bin/postactivate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/ytsejam/public_html/abctasarim/gunicorn \
--name $NAME \
--workers $NUM_WORKERS \
--env DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE \
--pythonpath $DJANGODIR \
--user $USER \
--bind=unix:$SOCKFILE yogavidya.wsgi:application
When I try to run it, I am getting an error for my service file:
...
ImportError: No module named ' '
...
How can I fix my script to serve the site correctly?
Thanks
virtualenvwrapper is supposed to be what you use in development. You want to deploy using the package that virtualenvwrapper is built on: the virtualenv package. The best suggestion I have for making this work is to try the steps you would normally use to start your virtualenvwrapper environment, namely sourcing the shell script and then using workon:
NAME="yogavidya" #Name of the application (*)
DJANGODIR=/home/ytsejam/public_html/abctasarim # Django project directory (*)
SOCKFILE=/home/ytsejam/public_html/abctasarim/run/gunicorn.sock # we will communicate using this unix socket (*)
USER=ytsejam # the user to run as (*)
GROUP=webdata # the group to run as (*)
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn (*)
DJANGO_SETTINGS_MODULE=yogavidya.settings.base # which settings file should Django use (*)
DJANGO_WSGI_MODULE=yogavidya.wsgi # WSGI module name (*)
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /path/to/virtualenvwrapper.sh
workon yv_dev
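One caveat with the snippet above: sourcing virtualenvwrapper.sh in a non-interactive script needs WORKON_HOME exported first, and the script's location varies by install. A minimal sketch; both paths are assumptions to adapt:
# assumption: virtualenvwrapper installed system-wide at /usr/local/bin
export WORKON_HOME=/home/ytsejam/.virtualenvs
source /usr/local/bin/virtualenvwrapper.sh
workon yv_dev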
You should also just try invoking gunicorn from the command line after activating your virtualenv.
Here’s how you can do it with virtualenv:
cd /home/ytsejam/public_html/abctasarim
sudo pip install virtualenv
virtualenv .
. bin/activate
pip install -r requirements.txt
pip install gunicorn
gunicorn script:
NAME="yogavidya" #Name of the application (*)
DJANGODIR=/home/ytsejam/public_html/abctasarim # Django project directory (*)
SOCKFILE=/home/ytsejam/public_html/abctasarim/run/gunicorn.sock # we will communicate using this unix socket (*)
USER=ytsejam # the user to run as (*)
GROUP=webdata # the group to run as (*)
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn (*)
cd $DJANGODIR
. bin/activate
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec /home/ytsejam/public_html/abctasarim/bin/gunicorn \
--name $NAME \
--workers $NUM_WORKERS \
--user $USER \
--bind=unix:$SOCKFILE yogavidya.wsgi:application
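If this script ends up managed by supervisor (as in the first question in this thread), a minimal program entry could look like the sketch below; the script path, program name, and log path are assumptions rather than values from the original answer:
[program:yogavidya]
; hypothetical path -- point this at wherever the start script lives, and chmod +x it
command = /home/ytsejam/public_html/abctasarim/gunicorn_start.sh
user = ytsejam
autostart = true
autorestart = true
redirect_stderr = true
stdout_logfile = /home/ytsejam/public_html/abctasarim/logs/gunicorn_supervisor.log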