We have created RPMs from .spec files that install services for both upstart and systemd (hence both .conf and .service files). But the .conf and .service files contain hard-wired paths to the installed binaries.
myservice.service:
[Unit]
Description=My Service
[Service]
WorkingDirectory=/opt/myproduct
ExecStart=/opt/myproduct/myservice /opt/myproduct/myservicearg
Restart=always
[Install]
WantedBy=multi-user.target
myservice.conf:
description "My Service"
respawn
respawn limit 15 5
start on (stopped rc and runlevel [2345])
stop on runlevel [06]
chdir /opt/myproduct
exec /opt/myproduct/myservice /opt/myproduct/myservicearg
The installation paths are likely to change, but brute-force search-and-replace seems stone-age.
I have used Ansible with .j2 (Jinja2) template files, which seems like a nice way to use a variable for the binary/script paths. Using them might look something like this:
myservice.service.j2:
[Unit]
Description=My Service
[Service]
WorkingDirectory={{ myproductpath }}
ExecStart={{ myproductpath }}/myservice {{ myproductpath }}/myservicearg
Restart=always
[Install]
WantedBy=multi-user.target
myservice.conf.j2:
description "My Service"
respawn
respawn limit 15 5
start on (stopped rc and runlevel [2345])
stop on runlevel [06]
chdir {{ myproductpath }}
exec {{ myproductpath }}/myservice {{ myproductpath }}/myservicearg
But I was unable to find anything that suggested this is a common approach for building RPMs. Is there a recommended way in RPMs to template these .conf and .service files, either filled in during RPM build or during install?
No. RPM does not have any such templating tool. Most developers prefer classic sed:
%build
....
sed -i 's/{{ myproductpath }}/\/real\/path/g' myservice.conf.j2
mv myservice.conf.j2 myservice.conf
Or you can BuildRequires: ansible and let Ansible expand it. But that is quite a heavy tool for this job.
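For context, here is a minimal sketch of how the sed approach might sit inside the spec file, assuming a %{myproductpath} macro defined at the top of the spec and the systemd RPM macros providing %{_unitdir}; the macro name and file layout are illustrative, not from the original question:
%define myproductpath /opt/myproduct

%build
# expand the Jinja2-style placeholders at build time
sed -e 's|{{ myproductpath }}|%{myproductpath}|g' myservice.service.j2 > myservice.service
sed -e 's|{{ myproductpath }}|%{myproductpath}|g' myservice.conf.j2 > myservice.conf

%install
# install into the systemd unit directory and the upstart job directory
install -D -m 0644 myservice.service %{buildroot}%{_unitdir}/myservice.service
install -D -m 0644 myservice.conf %{buildroot}%{_sysconfdir}/init/myservice.conf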
How to make DjangoQ run as a service using systemd?
There is a gap: installing and running DjangoQ is much like learning to run a Django project.
But now you want to push a project that uses DjangoQ to production.
The problem is that you are not familiar with servers and Linux; maybe you are even coming from Windows...
So can somebody tell us how to run DjangoQ on Ubuntu using systemd?
I am getting the error that ExecStart path is not absolute...
I came from here:
Stack Question
But that answer is not complete; too much tacit knowledge is left unshared...
[Unit]
Description= Async Tasks Runner
After=network.target remote-fs.target
[Service]
ExecStart=home/shreejvgassociates/ot_britain_college/env/bin/python manage.py qcluster --pythonpath home/shreejvgassociates/ot_britain_college/OverTimeCollege --settin$
RootDirectory=home/shreejvgassociates/ot_britain_college/OverTimeCollege
User=shreejvgassociates
Group=shreejvgassociates
Restart=always
[Install]
WantedBy=multi-user.target
It works well. Try it.
[Unit]
Description=Django-Q Cluster for site TestSite
After=multi-user.target
[Service]
Type=simple
ExecStart=/home/test/TestSite/venv/bin/python3 /home/test/TestSite/manage.py qcluster
[Install]
WantedBy=multi-user.target
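To use a unit like this, you would typically install and enable it along these lines (the djangoq.service file name is just an example):
sudo cp djangoq.service /etc/systemd/system/djangoq.service
sudo systemctl daemon-reload
sudo systemctl enable --now djangoq.service
sudo journalctl -u djangoq.service -f   # follow the cluster's logs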
I have a Django project with gunicorn and nginx.
I'm deploying this project with SaltStack.
In this project, there is a config.ini file that the Django views read.
For nginx, I set it up so that if nginx.conf changes, a cmd.run state running service nginx restart (with - onchanges: - file: nginx_conf) restarts the service.
For gunicorn, I can detect the change to config.ini, but I don't know how to reload gunicorn.
When gunicorn starts I pass the --reload option, but does this option detect changes to config.ini, or only to the Django project's files?
If not, what command should I use (e.g. gunicorn reload)?
Thank you.
PS: I saw kill -HUP pid, but I don't think Salt would know gunicorn's PID.
The --reload option looks for changes to the source code, not the config. And --reload shouldn't be used in production anyway.
I would either:
1) Tell gunicorn to write a PID file with --pid /path/to/pid/file and then get Salt to signal that PID, followed by a restart.
2) Get Salt to run pkill gunicorn, followed by a restart.
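A rough sketch of option 1 as a Salt state, assuming gunicorn is started with --pid /run/gunicorn/app.pid and that the state managing config.ini has the ID config_ini (both names are placeholders):
reload-gunicorn:
  cmd.run:
    # send SIGHUP to the gunicorn master whenever config.ini changes
    - name: kill -HUP $(cat /run/gunicorn/app.pid)
    - onchanges:
      - file: config_ini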
Don't run shell commands to manage services; use service states.
/path/to/nginx.conf:
  file.managed:
    # ...

/path/to/config.ini:
  file.managed:
    # ...

nginx:
  service.running:
    - enable: true
    - watch:
      - file: /path/to/nginx.conf

django-app:
  service.running:
    - enable: true
    - reload: true
    - watch:
      - file: /path/to/config.ini
You may need to create a service definition for gunicorn yourself. Here's a very basic systemd example:
[Unit]
Description=My django app
After=network.target
[Service]
Type=notify
User=www-data
Group=www-data
WorkingDirectory=/path/to/source
ExecStart=/path/to/venv/bin/gunicorn project.wsgi:application
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=mixed
[Install]
WantedBy=multi-user.target
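Once a unit like this is in place, Salt's reload: true maps onto systemctl reload, which runs the ExecReload= line above. A quick manual smoke test might look like this (django-app.service is the assumed unit name):
sudo systemctl daemon-reload
sudo systemctl enable --now django-app.service
sudo systemctl reload django-app.service   # sends SIGHUP to the gunicorn master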
I'm trying to run zabbix-agent 3.0.4 on CentOS 7. systemd fails to start the Zabbix agent; from journalctl -xe:
PID file /run/zabbix/zabbix_agentd.pid not readable (yet?) after start.
node=localhost.localdomain type=SERVICE_START msg=audit(1475848200.601:17994): pid=1 uid=0 auid=4294967298 ses=...
zabbix-agent.service never wrote its PID file. Failing.
Failed to start Zabbix Agent.
There is no permission error, and I tried re-configuring the PID path to the /tmp folder in zabbix-agent.service and zabbix_agentd.conf; it doesn't work.
Very weird. Does anyone have an idea? Thank you in advance.
=====
Investigating a little: the PID file should be under the /run/zabbix folder. I created zabbix_agentd.pid manually, and it disappears after a second. Really weird.
I had the same issue and it was related to SELinux, so I allowed zabbix_agent_t via semanage:
yum install policycoreutils-python
semanage permissive -a zabbix_agent_t
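To confirm that SELinux is what is removing or blocking the PID file, a common check on CentOS 7 looks like this (standard audit/SELinux tools):
getenforce                                  # Enforcing / Permissive / Disabled
ausearch -m avc -ts recent | grep zabbix    # look for AVC denials against zabbix_agent_t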
Giving full permissions (7777) to that PID file will help resolve the issue.
I had this too and it was SELinux; it was disabled, but I still had to run the command above.
That worked for me.
Prerequisites: CentOS 7, zabbix-server 3.4 and zabbix-agent 3.4 running on the same host.
Solution steps:
Install zabbix-server and zabbix-agent (no matter how: via yum or by building from source).
First check whether separate zabbix users already exist in /etc/passwd. If they do, go to step 4.
Create separate groups and users for zabbix-server and zabbix-agent.
Example (you can choose the usernames as you like):
groupadd zabbix-agent
useradd -g zabbix-agent zabbix-agent
groupadd zabbix
useradd -g zabbix zabbix
Specify the PID and log file locations in the Zabbix config files. Example:
For zabbix-server: in /etc/zabbix/zabbix_server.conf:
PidFile=/run/zabbix/zabbix_server.pid
LogFile=/var/log/zabbix/zabbix_server.log
For zabbix-agent: in /etc/zabbix/zabbix_agentd.conf:
PidFile=/run/zabbix-agent/zabbix-agent.pid
LogFile=/var/log/zabbix-agent/zabbix-agent.log
Create the directories specified in the config files (if they weren't created previously) and change their owners:
mkdir /var/log/zabbix-agent
mkdir /run/zabbix-agent
chown zabbix-agent:zabbix-agent /var/log/zabbix-agent
chown zabbix-agent:zabbix-agent /run/zabbix-agent
mkdir /var/log/zabbix
mkdir /run/zabbix
chown zabbix:zabbix /var/log/zabbix
chown zabbix:zabbix /run/zabbix
Check the systemd configs for the Zabbix services and add User= and Group= entries in the [Service] section for the accounts the services will run under. Example:
For zabbix-server: /etc/systemd/system/multi-user.target.wants/zabbix-server.service:
[Unit]
Description=Zabbix Server
After=syslog.target
After=network.target
[Service]
Environment="CONFFILE=/etc/zabbix/zabbix_server.conf"
EnvironmentFile=-/etc/sysconfig/zabbix-server
Type=forking
Restart=on-failure
PIDFile=/run/zabbix/zabbix_server.pid
KillMode=control-group
ExecStart=/usr/sbin/zabbix_server -c $CONFFILE
ExecStop=/bin/kill -SIGTERM $MAINPID
RestartSec=10s
TimeoutSec=0
User=zabbix
Group=zabbix
[Install]
WantedBy=multi-user.target
For zabbix-agent: /etc/systemd/system/multi-user.target.wants/zabbix-agent.service:
[Unit]
Description=Zabbix Agent
After=syslog.target
After=network.target
[Service]
Environment="CONFFILE=/etc/zabbix/zabbix_agentd.conf"
EnvironmentFile=-/etc/sysconfig/zabbix-agent
Type=forking
Restart=on-failure
PIDFile=/run/zabbix-agent/zabbix-agent.pid
KillMode=control-group
ExecStart=/usr/sbin/zabbix_agentd -c $CONFFILE
ExecStop=/bin/kill -SIGTERM $MAINPID
RestartSec=10s
User=zabbix-agent
Group=zabbix-agent
[Install]
WantedBy=multi-user.target
If there are no such configs there, you can find them in:
/usr/lib/systemd/system/
or enable the zabbix-agent.service unit, which creates a symlink in the /etc/systemd/system/multi-user.target.wants/ directory pointing to /usr/lib/systemd/system/zabbix-agent.service.
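In practice, that enable step plus a daemon reload is just:
systemctl daemon-reload
systemctl enable zabbix-server.service
systemctl enable zabbix-agent.service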
Start the services:
systemctl start zabbix-server
systemctl start zabbix-agent
Check which users the services are running as (first column):
ps -aux | grep zabbix
or via the top command.
Disable SELinux and firewalld and you're good to go.
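For completeness, on CentOS 7 that is usually done roughly like this (whether you really want both disabled in production is another question):
setenforce 0                                                          # SELinux permissive for the running system
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # disabled after the next reboot
systemctl disable --now firewalld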
I'm trying to find some information on the correct way to set up multiple Django sites on a Linode (Ubuntu 12.04.3 LTS, GNU/Linux 3.9.3-x86_64-linode33 x86_64).
Here is what I have now:
Web server: nginx
Every site is contained in a .virtualenv
Django and other packages are installed with pip in each .virtualenv
RabbitMQ is installed with sudo apt-get install rabbitmq-server, and a new user and vhost are created for each site.
Each site is started using a supervisor script:
[group:<SITENAME>]
programs=<SITENAME>-gunicorn, <SITENAME>-celeryd, <SITENAME>-celerycam
[program:<SITENAME>-gunicorn]
directory = /home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/
command=/home/<USER>/.virtualenvs/<SITENAME>/bin/gunicorn <PROJECT>.wsgi:application -c /home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/server_conf/<SITENAME>-gunicorn.py
user=<USER>
autostart = true
autorestart = true
stderr_events_enabled = true
redirect_stderr = true
logfile_maxbytes=5MB
[program:<SITENAME>-celeryd]
directory=/home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/
command=/home/<USER>/.virtualenvs/<SITENAME>/bin/python /home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/manage.py celery worker -E -n <SITENAME> --broker=amqp://<SITENAME>:<SITENAME>@localhost:5672//<SITENAME> --loglevel=ERROR
environment=HOME='/home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/',DJANGO_SETTINGS_MODULE='<PROJECT>.settings.staging'
user=<USER>
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
[program:<SITENAME>-celerycam]
directory=/home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/
command=/home/<USER>/.virtualenvs/<SITENAME>/bin/python /home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/manage.py celerycam
environment=HOME='/home/<USER>/.virtualenvs/<SITENAME>/<PROJECT>/',DJANGO_SETTINGS_MODULE='<PROJECT>.settings.staging'
user=<USER>
autostart=true
autorestart=true
startsecs=10
Question 1: Is this the correct way, or is there a better way to do this?
Question 2: I have tried to install celery flower, but how does that work with multiple sites? Do I need to install one flower package for each .virtualenv, or could I use one install for every site? How do I set up nginx to display the flower page(s) on my server?
Answer 1
There are - as so often :) - several ways to go. We set it up in a similar way.
For the supervisor configuration I would suggest a slightly less verbose style; below is an example for running the web/task processes for 'example.com':
/etc/supervisor/conf.d/example.com.conf
(We usually keep the config files in the repository as well and just symlink them, so this file could be a symlink to:
/var/www/example.com/conf/supervisord.conf )
[group:example.com]
programs=web, worker, cam
[program:web]
command=/srv/example.com/bin/gunicorn project.wsgi -c /var/www/example.com/app/gunicorn.conf.py
directory=/var/www/example.com/app/
user=<USER>
autostart=true
autorestart=true
redirect_stderr=True
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=5
stdout_logfile=/var/log/apps/web.example.com.log
[program:worker]
command=/srv/example.com/bin/celery -A project worker -l info
directory=/var/www/example.com/app/
user=<USER>
autostart=true
autorestart=true
redirect_stderr=True
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=5
stdout_logfile=/var/log/apps/web.example.com.log
[program:flower]
command=/srv/example.com/bin/celery flower -A project --broker=amqp://guest:guest@localhost:5672//example.com/ --url_prefix=flower --port 5001
directory=/var/www/example.com/app/
...
So you have less to type and it is easier to read:
# restart all 'programs'
supervisorctl restart example.com:*
# restart web/django
supervisorctl restart example.com:web
etc.
Answer 2
Not totally sure if it is the best way, but what I would do here (and usually do):
run flower separately for every app (see config above)
with its respective vhost (and url_prefix)
add an nginx reverse proxy (a location with the same name as the url_prefix)
/etc/nginx/sites-enabled/example.conf
server {
    ...
    location /flower {
        proxy_pass http://127.0.0.1:5001;
        ...
    }
}
Access the flower interface at example.com/flower
I can get this setup to work if I start gunicorn manually or if I add gunicorn to my Django INSTALLED_APPS. But when I try to start gunicorn with systemd, the gunicorn socket and service start fine but they don't serve anything to nginx; I get a 502 Bad Gateway.
Nginx is running under the "http" user/group, in a chroot jail. I used pythonbrew to set up the virtualenvs, so gunicorn is installed in my home directory under .pythonbrew. The virtualenv directory is owned by my user and the adm group.
I'm pretty sure there is a permission issue somewhere, because everything works if I start gunicorn myself but not if systemd starts it. I've tried changing the user and group directives inside the gunicorn.service file, but nothing worked; if root starts the server I get no errors and a 502, and if my user starts it I get no errors and a 504.
I have checked the nginx logs and there are no errors, so I'm sure it's a gunicorn issue. Should I have the virtualenv in the app directory? Who should be the owner of the app directory? How can I narrow down the issue?
/usr/lib/systemd/system/gunicorn-app.service
#!/bin/sh
[Unit]
Description=gunicorn-app
[Service]
ExecStart=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin/gunicorn_django
User=http
Group=http
Restart=always
WorkingDirectory = /home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin
[Install]
WantedBy=multi-user.target
/usr/lib/systemd/system/gunicorn-app.socket
[Unit]
Description=gunicorn-app socket
[Socket]
ListenStream=/run/unicorn.sock
ListenStream=0.0.0.0:9000
ListenStream=[::]:8000
[Install]
WantedBy=sockets.target
I realize this is kind of a sprawling question, but I'm sure I can pinpoint the issue with a few pointers. Thanks.
Update
I'm starting to narrow this down. When I run gunicorn manually and then run ps aux | grep gunicorn, I see two processes: a master and a worker. But when I start gunicorn with systemd, only one process is started. I tried adding Type=forking to my gunicorn-app.service file, but then I get an error when loading the service. I thought that maybe gunicorn wasn't running under the virtualenv, or the venv isn't getting activated?
Does anyone know what I'm doing wrong here? Maybe gunicorn isn't running in the venv?
I had a similar problem on OS X with launchd.
The issue was that I needed to allow the process to spawn subprocesses.
Try adding Type=forking:
[Unit]
Description=gunicorn-app
[Service]
Type=forking
I know this isn't the best way, but I was able to get it working by adding gunicorn to the list of Django INSTALLED_APPS. Then I just created a new systemd service:
[Unit]
Description=hack way to start gunicorn and django
[Service]
User=http
Group=http
ExecStart=/srv/http/www/nlp.com/nlp/bin/python /srv/http/www/nlp.com/nlp/nlp/manage.py run_gunicorn
Restart=always
[Install]
WantedBy=multi-user.target
There must be a better way, but judging by the lack of responses not many people know what that better way is.