Starting rqworker using supervisor causes spawn error - django

Trying to start rqworker as stated in its README using this command:
python manage.py rqworker default
For some reason it gives ERROR (spawn error), and the status shows FATAL Exited too quickly (process log may have details). The logs contain no information about the error (exit status 1; not expected).
My supervisor configuration:
[program:rqworker]
user=ubuntu
directory=/var/www/project/
command=/var/www/project/venv/bin/python manage.py rqworker default > /var/log/project/rq.log
stopsignal=TERM
autorestart=true
autostart=true
numprocs=1
Running the command directly as the ubuntu user works as expected.
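One likely culprit, for what it's worth: supervisor does not run command= through a shell, so the > /var/log/project/rq.log redirection is passed to manage.py as literal arguments instead of redirecting output. A sketch of the same program block using supervisor's own logging options (stdout_logfile and redirect_stderr are standard supervisor settings; the paths are taken from the question):
[program:rqworker]
user=ubuntu
directory=/var/www/project/
; no shell is involved, so log via supervisor rather than >
command=/var/www/project/venv/bin/python manage.py rqworker default
stdout_logfile=/var/log/project/rq.log
redirect_stderr=true
stopsignal=TERM
autorestart=true
autostart=true
numprocs=1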

I submitted a PR on how to set this up on Ubuntu, which may help you.
https://github.com/W7PEA/django-rq/blob/4afc19ab9866882c1809f89f84066856c94d75c6/README.rst
Deploying on Ubuntu
Create an rqworker service that runs the high, default, and low queues.
sudo vi /etc/systemd/system/rqworker.service
[Unit]
Description=Django-RQ Worker
After=network.target
[Service]
WorkingDirectory=<<path_to_your_project_folder>>
ExecStart=/home/ubuntu/.virtualenv/<<your_virtualenv>>/bin/python \
<<path_to_your_project_folder>>/manage.py \
rqworker high default low
[Install]
WantedBy=multi-user.target
Enable and start the service
sudo systemctl enable rqworker
sudo systemctl start rqworker
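To confirm the worker came up, systemctl status and the journal are the usual places to look:
sudo systemctl status rqworker
sudo journalctl -u rqworker -f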

Related

systemctl enable gunicorn fails with error "/etc/systemd/system/gunicorn.service:8: Missing '='."

I am deploying a Django application served by gunicorn, running locally on WSL.
I need to enable the required systemd units. When I run the command
systemctl enable gunicorn
I get the error:
/etc/systemd/system/gunicorn.service:8: Missing '='. Failed to enable unit, file gunicorn.service: Invalid argument.
gunicorn.service looks like this:
[Service]
User=root
Group=www-data
WorkingDirectory=<base_app_path>
Environment="PATH=<base_app_path>/env/bin"
EnvironmentFile=<base_app_path>/.env
ExecStart=<base_app_path>/env/bin/gunicorn \
--workers 4 \
--bind 0.0.0.0:8080 \
meeting.wsgi:application
RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target
What can be the problem here?
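One thing worth checking, given the WSL detail: systemd treats a backslash as a line continuation only when it is the very last character on the line, and Windows line endings (CRLF) leave an invisible carriage return after it, so --workers 4 would be parsed as a standalone directive with no '='. A quick way to spot and strip them (cat -A prints a carriage return as ^M):
cat -A /etc/systemd/system/gunicorn.service
sudo sed -i 's/\r$//' /etc/systemd/system/gunicorn.service
sudo systemctl daemon-reload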

systemctl daemon-reload fails with Invalid Number of Arguments

On an Amazon Linux 2 EC2 instance I edited the elastic-agent service, and I get the error "Invalid number of arguments":
[ec2-user@ip-172-31-21-92 ~]$ sudo systemctl daemon-reload elastic-agent
Invalid number of arguments.
I have edited my service file to look like this:
[Unit]
Description=Agent manages other beats based on configuration provided.
Documentation=https://www.elastic.co/products/beats/elastic-agent
Wants=network-online.target
After=network-online.target
[Service]
Environment="BEAT_LOG_OPTS="
Environment="BEAT_CONFIG_OPTS=-c /etc/elastic-agent/elastic-agent.yml"
Environment="BEAT_PATH_OPTS=--path.home /usr/share/elastic-agent --path.config /etc/elastic-agent --path.data /var/lib/elastic-agent --path.logs /var/log/elastic-agent"
ExecStart=/usr/share/elastic-agent/bin/elastic-agent --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS -v
Restart=always
[Install]
WantedBy=multi-user.target
The only edit I made to the installed file was to add the "-v" flag to the end of ExecStart.
No need to pass elastic-agent; daemon-reload takes no unit name, which is what the error is telling you. Just run
sudo systemctl daemon-reload
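and then restart the unit so the edited file (with the -v flag) takes effect:
sudo systemctl restart elastic-agent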

systemctl strange error: Invalid arguments

Here's my service file:
[Unit]
Description=Daphne Interface
[Service]
ExecStartPre=cd /home/git/hsfzmun/server
ExecStart=/bin/bash/ -c "cd /home/git/hsfzmun/server && /home/git/virtualenvs/hsfzmun/bin/daphne -b 0.0.0.0 -p 8001 -v2 config.asgi:channel_layer"
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target
When I execute sudo systemctl start daphnei I get:
Failed to start daphnei.service: Unit daphnei.service is not loaded properly: Invalid argument.
See system logs and 'systemctl status daphnei.service' for details.
And the result of systemctl status daphnei.service:
* daphnei.service - Daphne Interface
Loaded: error (Reason: Invalid argument)
Active: inactive (dead) (Result: exit-code) since Mon 2017-02-13 19:55:10 CST; 13min ago
Main PID: 16667 (code=exited, status=1/FAILURE)
What's wrong? I am using Ubuntu Server 16.04.
Generally, to debug the exact cause of "Invalid argument", you can use:
sudo systemd-analyze verify daphnei.service
or, in the case of a user's local service:
systemd-analyze --user verify daphnei.service
Maybe you've figured it out by now, but there's an extra / after /bin/bash in your ExecStart line.
I may have a slightly newer version of systemd -- when I tried it, the output of systemctl status included this message:
Executable path specifies a directory, ignoring: /bin/bash/ -c "cd /home/git/hsfzmun/server && /home/git/virtualenvs/hsfzmun/bin/daphne -b 0.0.0.0 -p 8001 -v2 config.asgi:channel_layer"
Also, you might consider using a WorkingDirectory line in the service instead of the cd &&.
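Putting both fixes together, a minimal corrected unit might look like this (ExecStartPre=cd ... is dropped as well, since cd is a shell builtin rather than an executable and ExecStartPre needs an absolute path to a binary; Type=notify is omitted too, because as far as I know daphne does not send systemd readiness notifications):
[Unit]
Description=Daphne Interface
[Service]
WorkingDirectory=/home/git/hsfzmun/server
ExecStart=/home/git/virtualenvs/hsfzmun/bin/daphne -b 0.0.0.0 -p 8001 -v2 config.asgi:channel_layer
Restart=always
KillSignal=SIGQUIT
[Install]
WantedBy=multi-user.target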

Convert the following systemd unit into upstart for Ubuntu 14.04 -- uWSGI Emperor mode did not work as expected

I have the following script, which was given to help me run uWSGI in Emperor mode.
https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-centos-7
I have tweaked it slightly into the version below:
[Unit]
Description=uWSGI Emperor service
[Service]
ExecStartPre=/bin/bash -c 'mkdir -p /run/uwsgi; chown vagrant:www-data /run/uwsgi'
ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target
I needed to rewrite this for Ubuntu 14.04, so I wrote the following into /etc/init/uwsgi.conf:
description "uWSGI Emperor service"
## equivalent to wantedby=multi-user.target
start on runlevel [2345]
# create a directory needed by the daemon
pre-start exec /bin/bash -c 'mkdir -p /run/uwsgi; chown vagrant:www-data /run/uwsgi'
exec /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
respawn
kill signal SIGQUIT
expect stop
What I expected:
/run/uwsgi would be created with the ownership vagrant:www-data
it would have two .sock files, djangoonpy2.sock and djangoonpy3.sock
What I actually saw:
/run/uwsgi was created with the ownership vagrant:www-data
it only had one .sock file, djangoonpy2.sock
I am running Ubuntu 14.04 using Vagrant as a virtual machine.
Please advise.
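Hard to say without the vassal logs, but two quick checks may narrow it down: init-checkconf validates the upstart job's syntax, and running the Emperor in the foreground prints why a vassal (presumably djangoonpy3 here) fails to bring up its socket:
init-checkconf /etc/init/uwsgi.conf
sudo /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites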

How to write an Ubuntu Upstart job for Celery (django-celery) in a virtualenv

I really enjoy using upstart. I currently have upstart jobs to run different gunicorn instances in a number of virtualenvs. However, the 2-3 examples I found for Celery upstart scripts on the interwebs don't work for me.
So, with the following variables, how would I write an Upstart job to run django-celery in a virtualenv?
Path to Django Project:
/srv/projects/django_project
Path to this project's virtualenv:
/srv/environments/django_project
Path to celery settings is the Django project settings file (django-celery):
/srv/projects/django_project/settings.py
Path to the log file for this Celery instance:
/srv/logs/celery.log
For this virtual env, the user:
iamtheuser
and the group:
www-data
I want to run the Celery Daemon with celerybeat, so, the command I want to pass to the django-admin.py (or manage.py) is:
python manage.py celeryd -B
It'll be even better if the script starts after the gunicorn job starts, and stops when the gunicorn job stops. Let's say the file for that is:
/etc/init/gunicorn.conf
You may need to add some more configuration, but this is an upstart script I wrote for starting django-celery as a particular user in a virtualenv:
start on started mysql
stop on stopping mysql
exec su -s /bin/sh -c 'exec "$0" "$@"' user -- /home/user/project/venv/bin/python /home/user/project/django_project/manage.py celeryd
respawn
It works great for me.
I know that it looks ugly, but it appears to be the current 'proper' technique for running upstart jobs as unprivileged users, based on this superuser answer.
I thought that I would have had to do more to get it to work inside of the virtualenv, but calling the python binary inside the virtualenv is all it takes.
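To cover the question's exact requirements (celerybeat via -B, the log file, and coupling to the gunicorn job), the same pattern would look something like this with the question's paths and user substituted in; this variant is a sketch, not something I have run:
# start/stop in lockstep with the gunicorn job from /etc/init/gunicorn.conf
start on started gunicorn
stop on stopping gunicorn
respawn
exec su -s /bin/sh -c 'exec "$0" "$@"' iamtheuser -- /srv/environments/django_project/bin/python /srv/projects/django_project/manage.py celeryd -B --logfile=/srv/logs/celery.log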
Here is my working config using the newer systemd, running on Ubuntu 16.04 LTS. Celery is in a virtualenv. The app is Python/Flask.
Systemd file: /etc/systemd/system/celery.service
You'll want to change the user and paths.
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=nick
Group=nick
EnvironmentFile=-/home/nick/myapp/server_configs/celery_env.conf
WorkingDirectory=/home/nick/myapp
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
Environment file (referenced above): /home/nick/myapp/server_configs/celery_env.conf
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/nick/myapp/venv/bin/celery"
# App instance to use
CELERY_APP="myapp.tasks"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
To automatically create the log and run folders with the correct permissions for your user, create a file in /usr/lib/tmpfiles.d. I was having trouble with the /var/run/celery folder being deleted on reboot, after which celery could not start properly.
My /usr/lib/tmpfiles.d/celery.conf file:
d /var/log/celery 2775 nick nick -
d /var/run/celery 2775 nick nick -
To enable: sudo systemctl enable celery.service
Now you'll need to reboot your system for the /var/log/celery and /var/run/celery folders to be created. You can check to see if celery started after rebooting by checking the logs in /var/log/celery.
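If you'd rather not reboot, systemd can apply the tmpfiles.d entries immediately:
sudo systemd-tmpfiles --create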
To restart celery: sudo systemctl restart celery.service
Debugging: tail -f /var/log/syslog and try restarting celery to see what the error is. It could be related to the backend or other things.
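The unit's own journal is often more direct than syslog:
sudo journalctl -u celery.service -n 50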
Hope this helps!