Problems configuring <on-connect> in icecast.xml

I have defined the following mountpoint in my icecast.xml configuration:
<mount type="normal">
  <mount-name>/data.ogg</mount-name>
  .....
  <on-connect>sh /bin/stream-start.sh</on-connect>
</mount>
And I have created the corresponding script at /bin/stream-start.sh.
The expectation is that when a request for http://..../data.ogg is made, stream-start.sh is executed, but it never runs. I now have the following questions:
How must the on-connect script be defined (/bin/stream-start or /bin/stream-start.sh)?
How can you pass parameters to the script?

In general you'll find it helpful to examine the Icecast logs. Both access.log and error.log may contain important information. It may also help to raise the loglevel and restart Icecast for the change to take effect.
https://icecast.org/docs/icecast-2.4.1/config-file.html#log
on-connect
State a program that is run when the source is started. It is passed a parameter which is the name of the mountpoint that is starting. The processing of the stream does not wait for the script to end.
Caution should be exercised as there is a small chance of stream file descriptors being mixed up with script file descriptors, if the FD numbers go above 1024. This will be further addressed in the next Icecast release.
This option is not available on Win32
(emphasis mine)
https://icecast.org/docs/icecast-2.4.1/config-file.html#mountsettings
Please also note that you cannot rely on the usual environment variables of an interactive shell being present; PATH, for example, will not be populated. You might want to just run export >/tmp/on-connect-env.txt from within the script and examine the file's contents to get an idea of what you will be working with. Also, you cannot pass the interpreter as part of the command as you did above; instead, put the interpreter with its full path in the shebang (#!) on the first line of the script.
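Putting that together, a minimal sketch of a working setup (the logger call is just my choice of a harmless example action, not something the question or the docs prescribe). In icecast.xml, reference the script directly:
<on-connect>/bin/stream-start.sh</on-connect>
And /bin/stream-start.sh, made executable with chmod +x and carrying the interpreter in its shebang:
#!/bin/sh
# Icecast passes the mountpoint name as the first (and only) argument.
MOUNT="$1"
# PATH is not populated, so call external commands by absolute path.
/usr/bin/logger "stream-start: source connected on ${MOUNT}"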

Related

How can I do code-path analysis in a debugger?

Are there any debuggers, tools, gdb-scripts that can be used to do code-path analysis?
Say I have an executable (in C++, but the question is not language restricted) that runs fine with one input and crashes with another. I would like to see the difference between the two execution paths, without having to step (or instrument) through potentially thousands of lines of code.
Ideally, I would be able to compare between 2 different streams of (C++) statements (preferably not assembler) and pinpoint the difference(s). Maybe a certain if-branch is taken in one execution and not the other, etc.
Is there a way to achieve / automate that? Thanks in advance.
So, provided the source of the bug can be localized to one (or a few) source files, the simplest way to get comparative code-execution paths seems to be GDB scripting. You create a GDB script file:
# preamble: set the program arguments and log all output to a file for later diffing
set args <arg_list>
set logging off
set logging file <log_file_1>
set logging on
set pagination off
set breakpoint pending on
# one breakpoint per source line; on each hit, print the frame and continue
b <source_file>:<line_1>
commands
frame
c
end
...
b <source_file>:<line_n>
commands
frame
c
end
with a preamble (all the set commands) and then a breakpoint + commands block for each line in the source file; these stanzas can easily be generated by a script (see the sketch below). Don't worry about blank or commented lines, they will simply be skipped.
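As one possible sketch of that generation step (the file names are placeholders), an awk one-liner can append a stanza for every line of the source file:
awk '{ printf "b %s:%d\ncommands\nframe\nc\nend\n", FILENAME, FNR }' <source_file> >> gdb_script.txt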
Load the executable in gdb (properly built with debug flags, of course); source the gdb script file above (call it gdb_script.txt) and run:
source gdb_script.txt
run
Then repeat the process above with a slightly changed script file (gdb_script.txt). Specifically, change the <arg_list> to modify the input; and set logging file to a different file <log_file_2>.
Source and run. Then compare <log_file_1> vs. <log_file_2> with your preferred diffing tool (say, tkdiff).
This will not do a better job than gcov (suggested in another answer), but it can help restrict the output to the suspicious region of code.

Start/Stop daemon on Linux via C++ code

I am trying to find out a way to launch a custom daemon from my program. The daemon itself is implemented using double-forking mechanism and works fine if launched directly.
So far I have come across various ways to start a daemon:
Create an init script and install it to the init.d directory.
Launch the program using the start-stop-daemon command.
Create a .desktop file and place it in one of the autostart paths.
While the first two methods are used to start the service from the command line, the third is for autostarting the service (or any other application) at user login.
So far my guess is that the program can be executed directly using the exec() family of functions, or the start-stop-daemon command can be executed via the system() function.
Is there a better way to start/stop service?
Generally startups are done from shell scripts that call your C++ program, which then does its double fork. Note that it should also close unneeded file descriptors, call setsid() and possibly setpgid/setpgrp (I can't remember if these apply to Linux too), possibly chdir("/"), etc. There are a number of fairly standard things to do, described in the Stevens book; for more information see http://software.clapper.org/daemonize/daemonize.html
If the daemon is supposed to run with root or another system user account, then the system /etc/init/ or /etc/init.d/ mechanisms are the appropriate places for scripts that stop|start|status|etc your daemon.
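A minimal sketch of such an init script, using the start-stop-daemon command mentioned in the question (the daemon name and path are placeholders):
#!/bin/sh
# /etc/init.d/mydaemon -- hypothetical skeleton
DAEMON=/usr/local/bin/mydaemon
case "$1" in
  start)
    start-stop-daemon --start --exec "$DAEMON"
    ;;
  stop)
    start-stop-daemon --stop --exec "$DAEMON"
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac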
If the daemon is supposed to be for the user, and run under his/her account, you have a couple of options.
1) The .desktop file - I'm not personally a fan, but if it also does something for you on logging out (like letting you trigger shutting down your daemon), it might be viable.
2) For console logins, there are ~/.bash_login and ~/.bash_logout - you can have these run commands supported by your daemon's wrapper to start it and (later) shut it down. The shutdown part can be done by saving the PID in a file, or by having .bash_login keep it in a variable that .bash_logout uses later (a sketch follows after this list). This may involve some tweaking to make sure the two scripts are run, once each, by the outermost login shell only (normal .bashrc stuff stays in .bashrc, and .bash_login would need to read it in for the login shell before starting the daemon, so PATH and so on would be set up by then).
3) For graphical environments, you need to find the wrapper script from which things like your X window manager are run. I'm using lightdm, and at some point /etc/X11/Xsession.d/40x11-common_xsessionrc ends up running my ~/.xsessionrc, which gives me a hook to start up anything I want (I have it run my ~/.xinitrc, which runs my window manager and everything), as well as a place to shut everything down later. The lack of standardization in how control is handed to the user makes finding the hook pretty annoying, since just using a different login manager (e.g. lightdm versus gdm) can change where the hook is.
4) A completely different approach is to just have the user's crontab start up the daemon. Run "man 5 crontab" and look for the special @reboot option to have tasks run at boot. I haven't used it myself - there's a chance it's root-restricted, but it's easy to test, and you need only make sure your daemon exits gracefully (and quickly) at system shutdown when the system sends it a SIGTERM signal (see /etc/init.d/sendsigs for details).
Hope something from that helps.
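As a sketch of option 2 (all names are placeholders, and it assumes the daemon writes its own PID file, since with a double fork the shell's $! is not the final PID):
# in ~/.bash_login -- start the daemon once per login
/usr/local/bin/mydaemon    # assumed to write $HOME/.mydaemon.pid itself
# in ~/.bash_logout -- shut it down again
if [ -f "$HOME/.mydaemon.pid" ]; then
    kill "$(cat "$HOME/.mydaemon.pid")" 2>/dev/null
    rm -f "$HOME/.mydaemon.pid"
fi
For option 4, the crontab entry is a single line such as @reboot /usr/local/bin/mydaemon.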

GDB scripting - execute command only if not debugging core file

I'm adding some features I find useful to my GDB startup script. A few of the startup commands apply only to "live" targets, or have components that make sense only with live targets. I'd like to be able to test for the presence (or absence) of a core file, and skip or amend these commands as appropriate.
I looked around in the Python API, but couldn't find anything that tells me whether an inferior is a core file or a live program. I'm fine with a scripting solution that works in either GDB itself or in the Python GDB scripting interface.
It doesn't look like there is a way to do that.
I'd expect an attribute on gdb.Inferior, but there isn't one.
File a feature request in GDB bugzilla.
info proc status returns "unable to handle request" for core files, whereas for a live process it returns several lines, the first of which looks like: "process 1234".
You can run that command and compare its first output line against that string using the execute_output() function from here: https://github.com/crossbowerbt/GDB-Python-Utils/blob/master/gdb_utils.py
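For example, a sketch along those lines using the Python API directly, placed in the GDB startup script (this assumes the failing core-file case surfaces as a gdb.error exception; if your GDB version instead returns the message as text, the startswith() check covers that too):
python
try:
    out = gdb.execute("info proc status", to_string=True)
    is_live = out.startswith("process")
except gdb.error:
    # core files make the command fail ("unable to handle request")
    is_live = False
if is_live:
    gdb.execute("set print pretty on")  # stand-in for your live-only commands
end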

How to return a command from a C++ application to the parent terminal?

Is it possible to run a C++ application from a terminal and, on certain conditions, return a command back to the terminal from which it was called? For instance, suppose I run an application in my terminal and, after my selections, the application needs to change my PATH by running an export command such as:
(USING BASH)
export PATH=.:/home/User/application/bin:$PATH
After I'm done and before my application completely closes, can I make the application change my terminal's local environment variables with the above command? Does Qt offer a way of doing this? Thanks in advance for any help!
No, you cannot change the parent process's environment.
Why? When your parent app started yours (probably using system()), it actually fork()ed - a child process was born as an almost exact replica of the parent, and then that child used the execve() call, which completely replaced the executable image of the process with the executable image of your application (for scripts it would be the image of an interpreter like bash).
In addition, that process also prepared a few more things. One is the list of open files, starting with file handles 0, 1, 2 (stdin, stdout, stderr). Another is a memory block (which belongs to the child process's address space) containing the environment variables (as key=value pairs).
Because the environment block belongs to your process, you can change your own environment as you please. But it is impossible for your process to change the environment memory block of the parent (or of any other process, for that matter). The only way to possibly achieve this would be to use IPC (inter-process communication) and gently ask the parent to do the task itself - but the parent must be actively listening (on a local or network socket) and be willing to fulfill such a request, and a child is not special compared to any other process in that regard.
This is also the reason why you can change the environment in bash with a shell script, but ONLY using the source or . built-in - because it is then processed by bash itself, without starting any external process. You cannot change the environment by executing any other program or script, for the reasons stated above.
The common solution is to have your application print the result to standard output, and have the invoker pass it on to its environment. A textbook example is ssh-agent, which prints environment variable assignments; you usually invoke it with eval $(ssh-agent)
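A sketch of that pattern applied to the question (the --print-env flag is hypothetical - it stands for whatever mode makes your application print assignments instead of trying to set them itself):
$ ./myapp --print-env
export PATH=.:/home/User/application/bin:$PATH
$ eval "$(./myapp --print-env)"
After the eval, the exported PATH is part of the calling shell's own environment.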

Is it OK to write to stdout in a Unix process without a terminal?

I want to be sure that the following will not compromise my process:
The Solaris program writes heavily to stdout (via the C++ wcout stream). The output serves for tracing, so during testing and analysis the programmer/tester can easily observe what happens. But the program is actually a server process, so in the production version it will run as a daemon without an attached console and write all the trace output to files.
I assume that stdout is redirected to /dev/null for a program without a console, in which case I guess all is fine. However, I want to be sure that the stdout output is not buffered somewhere, such that after sufficient run time we could have memory or disk-space problems.
Note: we cannot redirect the trace output to a file because it would grow too large. Instead, our own file-tracing mechanism makes sure that new files are created and old ones deleted, so as to always keep a certain amount of tracing and not more.
That depends on how the daemon is started, I guess. When the daemon process is created, the streams have to be taken care of somehow (for example, they need to be detached from the current process, lest the daemon would have to be terminated when the shell from which it was started manually exits).
It depends on how the daemon is started. If it's started as a cron job, the output will be captured and mailed to whoever owns the crontab entry, unless you redirect the output in the command line. (But programs started as cron jobs aren't truly daemons.)
More generally, all processes are started from another program (except the init processes); most of the time, that program is a shell (even crontab invokes a shell to start its jobs), and the command is given as a command line. And you can redirect the output anywhere you please in a command line; /dev/null is a popular choice for cases like yours.
Most daemons are started from an rc file, a shell script installed under /etc/rc<n>.d. Just redirect your output there.
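For instance, the relevant line in such a script might look like this (the server path is a placeholder):
/usr/local/bin/myserver > /dev/null 2>&1 &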
Or better yet, rewrite your code to use some form of rotating logs, instead of standard out.