Icecast2 with Ices 0.4 - shutdown questions - mp3

My Icecast2 with Ices 0.4 shuts down at the end of the playlist. How do I make it restart at the top of the playlist automatically? Also, I use this command to start it: /usr/local/bin/ices -c /etc/ices/ices.conf -v. How do I make it start automatically on server restart and/or after a crash?

I usually use "supervisor" to make sure something is running and gets restarted after a stop or a crash.
Install it with apt install supervisor, then create a config file /etc/supervisor/conf.d/ices.conf with something like this:
[program:ices]
command=/usr/local/bin/ices -c /etc/ices/ices.conf -v
user=icecast
autostart=true
autorestart=true
stdout_logfile=/dev/null
redirect_stderr=true
stopsignal=QUIT
That should do the job; with autorestart=true, supervisor will also start ices again when it exits at the end of the playlist.
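To make supervisor pick up the new config file, reload it (a minimal sketch; the name ices matches the [program:ices] section above):
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status ices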

Related

Attempting to restart Celery processes via Supervisor results in error

I am running supervisor/celery on an Amazon AWS server. Attempting to deploy a new application version eventually fails because the celery processes are not started. I have taken a look at the supervisord.conf file to ensure that the programs are included, which they are. At the end of the supervisord.conf file I have the following include:
[include]
files=celeryd.conf
files=flower.conf
I try to restart celery with
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-default celeryd-slowtasks
celeryd-default and celeryd-slowtasks being the names of the programs listed in celeryd.conf. I get the following error:
celeryd-default: ERROR (no such process)
celeryd-slowtasks: ERROR (no such process)
celeryd-default: ERROR (no such process)
celeryd-slowtasks: ERROR (no such process)
If I run
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart all
I get
flower: stopped
httpd: stopped
httpd: started
flower: started
without any mention of celery. Any idea how to start figuring this issue out?
Check /opt/python/etc/supervisord.conf; you are probably including a folder that you don't expect to be included.
Also ensure that the instance of supervisor that is running is actually using the config file you expect.
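Note also that in a supervisord [include] section the list of files normally goes on a single files= line; repeating the key usually means only one of the two lines is honored. A sketch of what the include could look like, using the file names from the question:
[include]
files = celeryd.conf flower.conf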

Keep a golang project running even if the console is closed

How do I keep a golang project running even if the console (PuTTY) is closed? I have a REST API developed in golang and hosted on AWS, and I use PuTTY to connect and run the project.
The following commands are used to install and run the project (myapi):
go install myapi
myapi
When I close PuTTY it stops working.
You have a number of options to keep your process running; the easiest is to use the nohup command.
$ nohup myapi &
The above command should run your application and print the output to a file called nohup.out. This file will be located in the directory where you run the command. Another option is to use screen or tmux.
If you want to start running your project in a more production ready way, you should look into service managers like systemd.
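If you go the systemd route, a minimal unit could look like the following (a sketch; the binary path, user, and service name myapi are assumptions, adjust them to your setup). Save it as /etc/systemd/system/myapi.service, then run sudo systemctl enable --now myapi:
[Unit]
Description=myapi REST service
After=network.target

[Service]
ExecStart=/home/ubuntu/go/bin/myapi
User=ubuntu
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target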
You can use something like supervisord.
Run your program as a non-privileged user and use the setcap utility to grant it the needed permissions.
For example, to allow binding to a low port number (like 80) you will need to run setcap once on the executable:
sudo setcap 'cap_net_bind_service=+ep' /opt/yourGoBinary
You may need to install setcap: sudo aptitude install libcap2-bin
Alternatively
Debian comes with a tool called start-stop-daemon which is a standard way of starting daemons in init.d scripts. It can also put the process in the background for you if the program does not do it on its own. Have a look at the --background option.
Use /etc/init.d/skeleton as the basis of your init script, but change the do_start() function as follows:
start-stop-daemon --start --quiet --pidfile $PIDFILE --make-pidfile \
--background --exec $DAEMON --test > /dev/null \
|| return 1
start-stop-daemon --start --quiet --pidfile $PIDFILE --make-pidfile \
--background --exec $DAEMON -- $DAEMON_ARGS \
|| return 2
Note that I also added the --make-pidfile option above, which creates the PID file for you.
In case you need to switch to a different user in a secure way, there is also the --chuid option.
On Ubuntu and RHEL/CentOS/SL 6.X the simplest way is to write an upstart job configuration file. Just put exec /usr/sbin/yourprogram in the /etc/init/yourprogram.conf configuration file. With upstart there is no need to force the program into the background. Do not add expect fork or expect daemon, which you would need with traditional daemons. With upstart it is better if the process does not fork.
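A minimal upstart job could look like this (a sketch; yourprogram is a placeholder, and the respawn stanza makes upstart restart the process if it dies):
# /etc/init/yourprogram.conf
description "yourprogram"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/sbin/yourprogram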

Puma - Rails on linux // Restart when process dies

I'm using puma on a Rails app; it sometimes dies without any particular reason, and it also often dies (does not restart after being stopped) when deployed.
What would be a good way to monitor whether the process died, and restart it right away?
Since it's called within a Rails app, it would be useful to have a way to define this for any app.
I have not found any usable way to do it (I looked into systemd and other Linux daemons… no success).
Thanks for any feedback.
You can use pumactl to start/stop the puma server. If you know where the puma.pid file is placed (on Mac it's usually "#{Dir.pwd}/tmp/pids/puma.pid"), you could do:
bundle exec pumactl -P path/puma.pid stop
To set the pid file path or other options (like daemonizing) you could create a puma config. You can find an example here. Then start and stop the server with just the config file:
bundle exec pumactl -F config/puma.rb start
You can also restart and check status in this way:
bundle exec pumactl -F config/puma.rb restart
bundle exec pumactl -F config/puma.rb status
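If you want puma restarted automatically whenever it dies, you can also wrap it in a process manager such as supervisor, as in the other answers on this page. A minimal sketch, assuming the app lives in /srv/myapp and runs as the deploy user (adjust paths and names to your setup):
[program:puma]
directory=/srv/myapp
command=bundle exec puma -C config/puma.rb
; use an absolute path to bundle if supervisor cannot find it on its PATH
user=deploy
autostart=true
autorestart=true
stdout_logfile=/var/log/puma_stdout.log
stderr_logfile=/var/log/puma_stderr.log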

sudo /etc/init.d/celeryd start generates a "Unknown command: 'celeryd_multi'"

I'm setting up celery to run daemonized, using the variables from my virtual environment. But when I run $ sudo /etc/init.d/celeryd start, I get Unknown command: 'celeryd_multi' Type 'manage.py help' for usage.
I have set the following:
CELERYD_CHDIR="/home/myuser/projects/myproject"
ENV_PYTHON="/home/myuser/.virtualenvs/myproject/bin/python"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
When I run $ /home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd_multi from the command line, it works fine.
Any ideas? I will gladly post any other code you need :)
Thank you!
Maybe you just set the wrong DJANGO_SETTINGS_MODULE:
try DJANGO_SETTINGS_MODULE="settings" instead of DJANGO_SETTINGS_MODULE="project.settings" (or the other way around).
The problem here is that when you run it as your user, the virtualenv already has the proper environment activated for your user "myuser", and it pulls packages from /home/myuser/.virtualenvs/myproject/...
When you do sudo /etc/init.d/celeryd start you are starting celery as root, which probably doesn't have a virtualenv activated in /root/.virtualenvs/ (if such a thing even exists), and thus it looks for python packages in /usr/lib/..., where your default python is and, consequently, where your celery is not installed.
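A quick way to confirm this (a sketch; the second command will typically fail with an ImportError if celery is only installed in the virtualenv):
/home/myuser/.virtualenvs/myproject/bin/python -c "import celery; print(celery.__file__)"
/usr/bin/python -c "import celery; print(celery.__file__)"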
Your options are to either:
1. Replicate the same virtualenv under the root user and start it like you tried with sudo.
2. Keep the virtualenv where it is and start celery as your user "myuser" (no sudo) without using init scripts.
3. Write a script that runs su - myuser -c '/bin/sh /home/myuser/.virtualenvs/myproject/bin/celeryd' to invoke it from init.d as myuser (see the sketch after the thoughts below).
4. Install supervisor outside of the virtualenv and let it do the dirty work for you.
Thoughts:
1. Avoid using root for anything you don't have to.
2. If you don't need celery to start on boot then this is fine, possibly wrapped in a script.
3. Plainly hackish to me, but it works if you don't want to invest an additional 30 minutes in using something else.
4. Probably the best way to handle ALL of your python startup needs; highly recommended.
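A minimal sketch of the option-3 wrapper, reusing the paths from the question (celery options omitted):
#!/bin/sh
# switch to myuser and start the celery worker from the project's virtualenv
exec su - myuser -c '/home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd'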

Issues with celery daemon

We're having issues with our celery daemon being very flaky. We use a fabric deployment script to restart the daemon whenever we push changes, but for some reason this is causing massive issues.
Whenever the deployment script is run the celery processes are left in some pseudo-dead state. They will (unfortunately) still consume tasks from rabbitmq, but they won't actually do anything. Confusingly, a brief inspection would indicate that everything seems to be "fine" in this state: celeryctl status shows one node online and ps aux | grep celery shows 2 running processes.
However, attempting to run /etc/init.d/celeryd stop manually results in the following error:
start-stop-daemon: warning: failed to kill 30360: No such process
While in this state attempting to run celeryd start appears to work correctly, but in fact does nothing. The only way to fix the issue is to manually kill the running celery processes and then start them again.
Any ideas what's going on here? We also don't have complete confirmation, but we think the problem also develops on its own after a few days with no deployment (and no activity; this is currently a test server).
I can't say that I know what's ailing your setup, but I've always used supervisord to run celery -- maybe the issue has to do with upstart? Regardless, I've never experienced this with celery running on top of supervisord.
For good measure, here's a sample supervisor config for celery:
[program:celeryd]
directory=/path/to/project/
command=/path/to/project/venv/bin/python manage.py celeryd -l INFO
user=nobody
autostart=true
autorestart=true
startsecs=10
numprocs=1
stdout_logfile=/var/log/sites/foo/celeryd_stdout.log
stderr_logfile=/var/log/sites/foo/celeryd_stderr.log
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
Restarting celeryd in my fab script is then as simple as issuing a sudo supervisorctl restart celeryd.