I am trying to configure gunicorn and supervisor for Django. I have configured gunicorn and I can run the Django app with gunicorn manually. Now I have configured supervisor, but the gunicorn process is not started when the instance restarts. If I start the app manually from supervisorctl, it runs fine.
When I check the status in supervisorctl it is FATAL, and stderr says:
Traceback (most recent call last):
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/bin/gunicorn", line 11, in <module>
sys.exit(run())
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/gunicorn/app/wsgiapp.py", line 75, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/gunicorn/app/base.py", line 189, in run
super(Application, self).run()
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/gunicorn/arbiter.py", line 58, in __init__
self.setup(app)
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/gunicorn/arbiter.py", line 114, in setup
self.app.wsgi()
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/gunicorn/app/wsgiapp.py", line 66, in load
return self.load_wsgiapp()
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
return util.import_app(self.app_uri)
File "/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/gunicorn/util.py", line 355, in import_app
__import__(module)
ImportError: No module named 'config'
Django app structure:
subscription-project/
    .codeintel
    .ebextensions
    .git
    .gitignore
    Makefile
    Procfile
    README.rst
    common
    config/
        __init__.py
        settings
        urls.py
        wsgi.py
    djangoapps
    locale
    manage.py
    requirements
    runtime.txt
subscriptionapp_gunicorn.py
import multiprocessing
preload_app = True
timeout = 300
bind = "127.0.0.1:8000"
pythonpath = "/subscription/app/subscriptionapp/subscription-project"
workers = (multiprocessing.cpu_count()-1)
upstart supervisor config (/etc/init/supervisor.conf)
description "supervisord"
start on runlevel [2345]
stop on runlevel [!2345]
kill timeout 432000
setuid www-data
exec /subscription/app/supervisor/venvs/supervisor/bin/supervisord -n --configuration /subscription/app/supervisor/supervisord.conf
/subscription/app/supervisor/conf.d/subscriptionapp.conf
[program:subscriptionapp]
command=/subscription/app/subscriptionapp/venvs/subscriptionapp/bin/gunicorn -c /subscription/app/subscriptionapp/subscriptionapp_gunicorn.py config.wsgi
user=www-data
directory=/subscription/app/subscriptionapp/subscription-project
environment=PORT=8000,ADDRESS=127.0.0.1,LANG=en_US.UTF-8,DJANGO_SETTINGS_MODULE=config.settings.base,PATH="/subscription/app/subscriptionapp/venvs/subscriptionapp/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
stdout_logfile=/subscription/var/log/supervisor/subscriptionapp-stdout.log
stderr_logfile=/subscription/var/log/supervisor/subscriptionapp-stderr.log
killasgroup=true
stopasgroup=true
supervisord.conf
; supervisor config file
[unix_http_server]
file=/subscription/var/supervisor/supervisor.sock ; (the path to the socket file)
chmod=0700 ; socket file mode (default 0700)
[supervisord]
logfile=/subscription/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/subscription/var/supervisor/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/subscription/var/log/supervisor ; ('AUTO' child log dir, default $TEMP)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///subscription/var/supervisor/supervisor.sock ; use a unix:// URL for a unix socket
; The [include] section can just contain the "files" setting. This
; setting can list multiple files (separated by whitespace or
; newlines). It can also contain wildcards. The filenames are
; interpreted as relative to this file. Included files *cannot*
; include files themselves.
[inet_http_server]
port = 127.0.0.1:9001
[include]
files = /subscription/app/supervisor/conf.d/*.conf
Any help resolving this issue would be highly appreciated.
It seems the config package is not available to gunicorn. Try explicitly adding the project directory to pythonpath like so:
/subscription/app/supervisor/conf.d/subscriptionapp.conf
[program:subscriptionapp]
command=/subscription/app/subscriptionapp/venvs/subscriptionapp/bin/gunicorn -c /subscription/app/subscriptionapp/subscriptionapp_gunicorn.py --pythonpath '/subscription/app/subscriptionapp/subscription-project,/subscription/app/subscriptionapp/venvs/subscriptionapp/lib/python3.4/site-packages/' config.wsgi
user=www-data
directory=/subscription/app/subscriptionapp/subscription-project
environment=PORT=8000,ADDRESS=127.0.0.1,LANG=en_US.UTF-8,DJANGO_SETTINGS_MODULE=config.settings.base,PATH="/subscription/app/subscriptionapp/venvs/subscriptionapp/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
stdout_logfile=/subscription/var/log/supervisor/subscriptionapp-stdout.log
stderr_logfile=/subscription/var/log/supervisor/subscriptionapp-stderr.log
killasgroup=true
stopasgroup=true
If this helps gunicorn "find" config, it may be that your gunicorn configuration file is not being loaded properly. Depending on your gunicorn version, loading settings from a Python module may require a special form, as in the docs:
Changed in version 19.4: Loading the config from a Python module requires the python: prefix.
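As a quick sanity check (run with the virtualenv's Python; paths are copied from the question, so adjust to your layout), you can confirm that the config package is importable once the project directory is on sys.path:
# hedged sketch: verify that the config package can be imported
import sys
sys.path.insert(0, "/subscription/app/subscriptionapp/subscription-project")
import config
print(config.__file__)  # should point at .../subscription-project/config/__init__.py
If this import fails under the same interpreter supervisor starts, the path is the problem; if it succeeds, the gunicorn config file (and its pythonpath setting) not being loaded is the more likely culprit.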
(venv) ubuntu@ip-172-31-6-77:~/redrebelgames_python$ gunicorn redrebelgames_python.wsgi:application
[2021-11-25 20:01:09 +0000] [3758] [INFO] Starting gunicorn 20.1.0
Traceback (most recent call last):
File "/home/ubuntu/redrebelgames_python/venv/bin/gunicorn", line 8, in <module>
sys.exit(run())
File "/home/ubuntu/redrebelgames_python/venv/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 67, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/home/ubuntu/redrebelgames_python/venv/lib/python3.8/site-packages/gunicorn/app/base.py", line 231, in run
super().run()
File "/home/ubuntu/redrebelgames_python/venv/lib/python3.8/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/home/ubuntu/redrebelgames_python/venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 198, in run
self.start()
File "/home/ubuntu/redrebelgames_python/venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 155, in start
self.LISTENERS = sock.create_sockets(self.cfg, self.log, fds)
File "/home/ubuntu/redrebelgames_python/venv/lib/python3.8/site-packages/gunicorn/sock.py", line 162, in create_sockets
raise ValueError('certfile "%s" does not exist' % conf.certfile)
ValueError: certfile "/etc/letsencrypt/live/api.redrebelgames.com/cert.pem" does not exist
How do I allow gunicorn to access these files? For some reason it's not working and simply changing the chmod permissions won't work because certbot will eventually change them back.
The certbot files are owned by one identity (typically root), while Gunicorn runs under a different identity. The key is to grant the Gunicorn user permission to read the Let's Encrypt files. Typically you add the user Gunicorn runs as to the group that owns the Let's Encrypt files and make the files readable by that group.
Example command:
sudo usermod -a -G groupname username
The user must log in again after the group membership change; it is often simpler to just restart the system.
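As a rough check that the group change worked, you could run something like this as the user Gunicorn runs under (the cert path is taken from the traceback above; the privkey path is my assumption based on the usual Let's Encrypt layout):
# hedged sketch: check whether the Gunicorn user can read the Let's Encrypt files
import os

paths = [
    "/etc/letsencrypt/live/api.redrebelgames.com/cert.pem",     # from the traceback
    "/etc/letsencrypt/live/api.redrebelgames.com/privkey.pem",  # assumed key path
]
for p in paths:
    print(p, "readable:", os.access(p, os.R_OK))
Keep in mind that /etc/letsencrypt/live typically contains symlinks into /etc/letsencrypt/archive, so those parent directories also need to be traversable by the group.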
Another method (not recommended) is to run Gunicorn as a privileged process. That has security risks.
I am following this tutorial to set up a Django-gunicorn-nginx server on AWS EC2. After installing all dependencies, I made the following change in wsgi.py:
import os, sys
# add the hellodjango project path into the sys.path
sys.path.append('/home/ubuntu/project/ToDo-application/')
# add the virtualenv site-packages path to the sys.path
sys.path.append('/home/ubuntu/.local/lib/python3.6/site-packages')
# pointing to the project settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "todo_app.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
I run gunicorn todo_app.wsgi and get the following error:
ubuntu@ip-172-31-61-163:~/project/ToDo-application$ gunicorn todo_app.wsgi
[2018-11-07 11:25:35 +0000] [8211] [INFO] Starting gunicorn 19.7.1
[2018-11-07 11:25:35 +0000] [8211] [INFO] Listening at: http://127.0.0.1:8000 (8211)
[2018-11-07 11:25:35 +0000] [8211] [INFO] Using worker: sync
[2018-11-07 11:25:35 +0000] [8215] [INFO] Booting worker with pid: 8215
[2018-11-07 11:25:35 +0000] [8215] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 578, in spawn_worker
worker.init_process()
File "/usr/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 126, in init_process
self.load_wsgi()
File "/usr/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 135, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 65, in load
return self.load_wsgiapp()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/lib/python2.7/dist-packages/gunicorn/util.py", line 377, in import_app
__import__(module)
File "/home/ubuntu/urbanpiper/ToDo-application/todo_app/wsgi.py", line 20, in <module>
from django.core.wsgi import get_wsgi_application
File "/home/ubuntu/.local/lib/python3.6/site-packages/django/__init__.py", line 1, in <module>
from django.utils.version import get_version
File "/home/ubuntu/.local/lib/python3.6/site-packages/django/utils/version.py", line 71, in <module>
@functools.lru_cache()
AttributeError: 'module' object has no attribute 'lru_cache'
Is this because gunicorn has Python 2 dependencies while Django is on Python 3? I tried uninstalling and reinstalling gunicorn, but it did not work.
# WRONG:
# add the virtualenv site-packages path to the sys.path
sys.path.append('/home/ubuntu/.local/lib/python3.6/site-packages')
You ought to create a virtualenv for each WSGI application you wish to host on the server, rather than appending that site-packages path to sys.path. If you followed the linked tutorial word for word, then this is the part which needs more explaining:
Make a virtualenv and install your pip requirements
Essentially:
# install virtualenv
sudo apt-get install virtualenv
# create the virtual environment, specifically for the stated python version
virtualenv -p python3.6 TITLE_OF_VENV
# You now have a directory called TITLE_OF_VENV (You may wish to replace this
# with something more subtle).
# Activate the virtualenv for your current shell session
. TITLE_OF_VENV/bin/activate
# The dot above is intentional; it is shorthand for source, which runs the
# activation script in the current shell and sets the environment vars
Your shell prompt should now look like this: (TITLE_OF_VENV) ubuntu@ip-172-31-61-163:~/project/ToDo-application$, indicating that the venv is active. To switch out of the venv, run the command deactivate.
Anything you install with pip here will then live in the directory TITLE_OF_VENV/lib/python3.6/site-packages (while this virtual environment is active). This has the advantage of keeping different project requirements separate.
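If you want to double-check which interpreter the venv gives you, a tiny Python check (nothing project-specific assumed) can confirm it:
# run inside the activated venv
import sys
print(sys.executable)  # should point into TITLE_OF_VENV/bin/
print(sys.prefix)      # should be the TITLE_OF_VENV directory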
Test the python version (with the venv still active):
(TITLE_OF_VENV)$ python --version
Python 3.6
Now install gunicorn into this virtual environment, along with any other project requirements:
(TITLE_OF_VENV)$ pip install gunicorn
(TITLE_OF_VENV)$ pip install -r requirements.txt
Update your wsgi.py:
import os
# pointing to the project settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "todo_app.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
And then launch it from within the virtual environment:
(TITLE_OF_VENV)$ gunicorn todo_app.wsgi:application
You could also add the -D flag to the gunicorn command, which makes it run in the background. Also, don't make this server publicly accessible; if it's a production box, you need to run it behind nginx!
I'm trying to assemble some of the missing pieces in my understanding of how to deploy a Django App to heroku, so that I can launch an instance of Newsdiffs on Heroku.
When I walk through the instructions for running Django on Heroku they have you add a line to Procfile that reads thus: web: gunicorn hellodjango.wsgi --log-file -
But there's no actual file named "hellodjango.wsgi" so ... in that tutorial, where is the "hellodjango.wsgi" module created?
And, perhaps more to the point, why is heroku local balking with web.1 | : No module named newsdiffs.wsgi when newsdiffs/wsgi.py definitely exists?
I can launch the app locally with python website/manage.py runserver but if I do gunicorn newsdiffs.wsgi I get the following, which doesn't include any obvious indications (to my eye) of what I'm doing wrong:
(venv)amanda@mona:newsdiffs$ gunicorn newsdiffs.wsgi
Traceback (most recent call last):
File "/home/amanda/Public/newsdiffs/venv/bin/gunicorn", line 11, in <module>
sys.exit(run())
File "/home/amanda/Public/newsdiffs/venv/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/home/amanda/Public/newsdiffs/venv/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 185, in run
super(Application, self).run()
File "/home/amanda/Public/newsdiffs/venv/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run
Arbiter(self).run()
File "/home/amanda/Public/newsdiffs/venv/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run
self.manage_workers()
File "/home/amanda/Public/newsdiffs/venv/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers
self.spawn_workers()
File "/home/amanda/Public/newsdiffs/venv/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 542, in spawn_workers
time.sleep(0.1 * random.random())
File "/home/amanda/Public/newsdiffs/venv/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld
self.reap_workers()
File "/home/amanda/Public/newsdiffs/venv/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers
raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
The gunicorn command takes the name of a Python module, not a path to a file. If hellodjango.wsgi is the name of the Python module, the corresponding file will be hellodjango/wsgi.py or hellodjango/wsgi/__init__.py.
This is the same syntax used to refer to a module when importing it, e.g. you would write from hellodjango.wsgi import * to get access to the names defined in hellodjango/wsgi.py.
The django-admin startproject command will create a wsgi.py file in the same directory as the project's settings.py and urls.py files.
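For reference, the wsgi.py that django-admin startproject hellodjango generates looks roughly like this (exact comments vary between Django versions):
# hellodjango/wsgi.py (approximate contents of the generated file)
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "hellodjango.settings")

application = get_wsgi_application()  # this is what gunicorn hellodjango.wsgi imports
So web: gunicorn hellodjango.wsgi --log-file - in the Procfile simply tells gunicorn to import that module and serve its application callable.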
Sending emails with Celery works fine on the production server.
I am trying to use it on my local dev VM and it does not work.
I get this on restart:
Starting web server apache2 [ OK ]
Starting message broker rabbitmq-server [ OK ]
Starting celery task worker server celeryd [ OK ]
Starting celeryev...
: No such file or directory
Also I get this error in console when running the page:
error: [Errno 104] Connection reset by peer
Production settings:
import djcelery
djcelery.setup_loader()
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672 # default RabbitMQ listening port
BROKER_USER = "vs_user"
BROKER_PASSWORD = "user01"
BROKER_VHOST = "vs_vhost"
CELERY_BACKEND = "amqp" # telling Celery to report the results back to RabbitMQ
CELERY_RESULT_DBURI = ""
When I run:
sudo rabbitmqctl list_vhosts
I get this:
Listing vhosts ...
/
...done.
What do I need to change in these settings to run it successfully on the local VM?
UPDATE
The vhost and user were definitely missing, so I ran the suggested commands.
They executed OK, but it still does not work; same error.
There must be one more thing preventing it from working, and celeryev is the suspect.
This is what I get when stopping and starting the server:
Stopping web server apache2 ... waiting . [ OK ]
Stopping message broker rabbitmq-server [ OK ]
Stopping celery task worker server celeryd start-stop-daemon: warning: failed to kill 28006: No such process
[ OK ]
Stopping celeryev...NOT RUNNING
Starting web server apache2 [ OK ]
Starting message broker rabbitmq-server [ OK ]
Starting celery task worker server celeryd [ OK ]
Starting celeryev...
: No such file or directory
Traceback (most recent call last):
File "/webapps/target/forums/json_views.py", line 497, in _send_forum_notifications
post_master_json.delay('ForumNotificationEmail', email_params)
File "/usr/local/lib/python2.6/dist-packages/celery-3.0.25-py2.6.egg/celery/app/task.py", line 357, in delay
return self.apply_async(args, kwargs)
File "/usr/local/lib/python2.6/dist-packages/celery-3.0.25-py2.6.egg/celery/app/task.py", line 474, in apply_async
**options)
File "/usr/local/lib/python2.6/dist-packages/celery-3.0.25-py2.6.egg/celery/app/amqp.py", line 250, in publish_task
**kwargs
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/messaging.py", line 164, in publish
routing_key, mandatory, immediate, exchange, declare)
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 470, in _ensured
interval_max)
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 396, in ensure_connection
interval_start, interval_step, interval_max, callback)
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/utils/__init__.py", line 217, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 246, in connect
return self.connection
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 761, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/connection.py", line 720, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python2.6/dist-packages/kombu-2.5.16-py2.6.egg/kombu/transport/pyamqp.py", line 115, in establish_connection
conn = self.Connection(**opts)
File "/usr/local/lib/python2.6/dist-packages/amqp-1.0.13-py2.6.egg/amqp/connection.py", line 136, in __init__
self.transport = create_transport(host, connect_timeout, ssl)
File "/usr/local/lib/python2.6/dist-packages/amqp-1.0.13-py2.6.egg/amqp/transport.py", line 264, in create_transport
return TCPTransport(host, connect_timeout)
File "/usr/local/lib/python2.6/dist-packages/amqp-1.0.13-py2.6.egg/amqp/transport.py", line 99, in __init__
raise socket.error(last_err)
error: timed out
I ran manage.py celeryev and got a console showing workers and tasks. Everything is empty and I only get Connection Error: error(timeout('timed out',),) repeatedly.
It looks like you don't have the virtual host you specified set up on your local RabbitMQ server.
You would first need to add the virtual host.
sudo rabbitmqctl add_vhost vs_vhost
Next you need to add the permissions for your user.
sudo rabbitmqctl set_permissions -p vs_vhost vs_user ".*" ".*" ".*"
Also, make sure that you actually have a user set up, otherwise you can add one using this command.
sudo rabbitmqctl add_user vs_user user01
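If you want to verify the broker settings outside of Celery, a minimal connection test with kombu (credentials and vhost copied from the settings above) might look like this:
# hedged sketch: confirm the broker, vhost and credentials are reachable
from kombu import Connection

with Connection("amqp://vs_user:user01@127.0.0.1:5672/vs_vhost") as conn:
    conn.connect()  # raises if the host, vhost or credentials are wrong
    print("broker connection OK")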
I've been trying to get django-pipeline to work to combine and minify my css and js assets. I don't seem to be able to sort the following issue out. When I run:
python manage.py collectstatic --noinput
I get an error:
pipeline.exceptions.CompressorError: The system cannot find the path specified.
Do I maybe need to install some additional packages? If so, how?
My settings for django-pipeline:
STATICFILES_STORAGE = 'pipeline.storage.PipelineCachedStorage'
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'pipeline.finders.PipelineFinder',
)
PIPELINE_CSS = {
    'testme': {
        'source_filenames': {
            'static/surveys/css/main.css',
        },
        'output_filename': 'css/testme.css',
    },
}
PIPELINE_JS = {
    'testmejs': {
        'source_filenames': {
            'surveys/js/gklib.js',
        },
        'output_filename': 'surveys/js/testmejs.css',
    },
}
PIPELINE_ENABLED = True
This is the complete output:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\django\core\management\__init__.py", line 385, in execute_from_command_line
utility.execute()
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\django\core\management\__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\django\core\management\base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\django\core\management\base.py", line 338, in execute
output = self.handle(*args, **options)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\django\core\management\base.py", line 533, in handle
return self.handle_noargs(**options)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 168, in handle_noargs
collected = self.collect()
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 114, in collect
for original_path, processed_path, processed in processor:
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\pipeline\storage.py", line 36, in post_process
packager.pack_javascripts(package)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\pipeline\packager.py", line 112, in pack_javascripts
return self.pack(package, self.compressor.compress_js, js_compressed, templates=package.templates, **kwargs)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\pipeline\packager.py", line 106, in pack
content = compress(paths, **kwargs)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\pipeline\compressors\__init__.py", line 67, in compress_js
js = getattr(compressor(verbose=self.verbose), 'compress_js')(js)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\pipeline\compressors\yuglify.py", line 13, in compress_js
return self.compress_common(js, 'js', settings.PIPELINE_YUGLIFY_JS_ARGUMENTS)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\pipeline\compressors\yuglify.py", line 10, in compress_common
return self.execute_command(command, content)
File "d:\Development\django\TopDish\env_mrg_tacx_laptop\lib\site-packages\pipeline\compressors\__init__.py", line 240, in execute_command
raise CompressorError(stderr)
pipeline.exceptions.CompressorError: The system cannot find the path specified.
UPDATE
I've tried it again using another compressor:
PIPELINE_CSS_COMPRESSOR = 'pipeline.compressors.csstidy.CSSTidyCompressor'
This gives the exact same result; what could I be doing wrong?
UPDATE 2
If I set the compressors to None everything works, i.e. the files get combined and placed in the static files folder. They are also served correctly.
PIPELINE_CSS_COMPRESSOR = None
PIPELINE_JS_COMPRESSOR = None
So it must be something either in accessing or using the compressors. I'm running on Windows.
UPDATE 3
I've added some print() commands to __init__.py in /site-packages/pipeline/compressors/
class SubProcessCompressor(CompressorBase):
    def execute_command(self, command, content):
        import subprocess
        print("Command: " + command)
The command is: /usr/bin/env/ yuglify --type=css --terminal
Which can (probably?) never work on Windows.
I then tried to deploy it to AWS Elastic Beanstalk, but I get an error there as well:
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
ERROR: [Instance: i-75dc5e91 Module: AWSEBAutoScalingGroup ConfigSet: null] Command failed on instance. Return code: 1 Output: [CMD-AppDeploy/AppDeployStage0/EbExtensionPostBuild] command failed with error code 1: Error occurred during build: Command 01_collectstatic failed.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
INFO: New application version was deployed to running EC2 instances.
ERROR: Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
ERROR: Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
I know it's possible to manually set the location of the compressor binary, but where do I set it for Elastic Beanstalk?
Any suggestions on how to sort this out?
I've fixed it by adding commands to the .ebextensions/app.config file:
# these commands run before the application and web server are
# set up and the application version file is extracted
commands:
  01_node_install:
    # run this command from the /tmp directory
    cwd: /tmp
    # don't run the command if node is already installed (file /usr/bin/node exists)
    test: '[ ! -f /usr/bin/node ] && echo "node not installed"'
    # install from the epel repository
    # flag -y for non-interactive installation
    command: 'yum install -y nodejs npm --enablerepo=epel'
  02_yuglify_install:
    # each command needs its own entry, otherwise the YAML keys collide
    command: 'npm -g install yuglify'
I based these commands on what I found at http://qpleple.com/install-nodejs-on-elastic-beanstalk/
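If collectstatic still cannot find the compressor after the npm install, you can also point django-pipeline at the binary explicitly in settings.py (same old-style PIPELINE_* naming as used above; the path is an assumption about where npm -g installs it, so check with which yuglify on the instance):
# settings.py: explicit compressor binary location (path assumed)
PIPELINE_YUGLIFY_BINARY = '/usr/bin/yuglify'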