I have the pid (process id) of a process. This process is actually a bash script. The bash script has some sub-processes (which are launched via $(command)). I would like to get the list of the pids of these sub-processes.
pstree (https://github.com/FredHucht/pstree/blob/main/pstree.c) is showing them nicely, so it should be possible to achieve this.
One example:
% pstree 24878
-+- 24878 root /path/to/a/bash/script.sh inputX
\-+- 24879 myuser /path/to/an/executable/launched/by/above/script.sh inputY
I am currently working on extracting the relevant part from pstree to achieve this, but if there is an existing, already working solution or 3rd-party library, it would be easier to simply use that. (So far I have checked the Boost and Poco libraries, but neither of them offers this.)
I am on macOS, but a multi-platform solution would be even better.
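For reference, here is the kind of quick-and-dirty sketch I could live with if no library turns up: it just shells out to pgrep -P (available on macOS and most Linux systems) instead of reimplementing pstree, and the pid is hard-coded from the example above.

// Sketch: list the direct child pids of a given pid by shelling out to "pgrep -P".
#include <sys/types.h>
#include <cstdio>
#include <string>
#include <vector>

std::vector<pid_t> children_of(pid_t parent) {
    std::vector<pid_t> pids;
    std::string cmd = "pgrep -P " + std::to_string(parent);
    if (FILE* p = popen(cmd.c_str(), "r")) {
        long pid;
        while (std::fscanf(p, "%ld", &pid) == 1)   // pgrep prints one pid per line
            pids.push_back(static_cast<pid_t>(pid));
        pclose(p);
    }
    return pids;
}

int main() {
    for (pid_t pid : children_of(24878))   // 24878 is the pid from the pstree example
        std::printf("%d\n", static_cast<int>(pid));
}

(Recursing on each returned pid would give the whole tree rather than only the direct children.)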
Related
I have a C++ program from which I want to execute multiple commands in a shell.
My current solution uses the system() function and looks like this:
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_1");
... do_something_else ...
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_2");
... do_something_else ...
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_3");
...
It works, but SETUP_ENVIRONMENT takes a few seconds, which makes the program really slow, and I have to run it every time because system() starts a new shell for each call.
I want to be able to setup my shell once and then run all commands in it.
execute_in_shell("SETUP_ENVIRONMENT");
return_value = execute_in_shell("RUN_USEFUL_APP_1");
... do_something_else ...
return_value = execute_in_shell("RUN_USEFUL_APP_2");
... do_something_else ...
return_value = execute_in_shell("RUN_USEFUL_APP_3");
...
How do I do that?
I'm on Linux.
As an alternative to answer 1, you could also have your program create a shell script that runs all your useful programs, and execute that script once. Then a new shell won't be started for each particular useful program.
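For instance, a rough sketch of that idea (the script path and the command names below are placeholders, not your real SETUP_ENVIRONMENT):

// Sketch: generate one script containing all the commands, then run it in a single shell.
#include <cstdlib>
#include <fstream>

int main() {
    {
        std::ofstream script("/tmp/run_batch.sh");   // placeholder path
        script << "#!/bin/sh\n"
               << "setup_environment\n"              // stands in for SETUP_ENVIRONMENT
               << "run_useful_app_1\n"
               << "run_useful_app_2\n"
               << "run_useful_app_3\n";
    }                                                // close the file before running it
    int return_value = std::system("sh /tmp/run_batch.sh");   // one shell for everything
    return return_value;
}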
You have three reasonable options for doing this, depending on your specific need.
If the various calls you make to external tools are part of a coherent routine, then you can – and probably should – follow #dmi's advice and write a short shell script that you can call from your C++ program.
If you instead need to start procedures here and there, you might be interested in running the shell as an inferior process and attaching your program to it – so that instead of talking to your terminal, the shell process talks to your C++ program.
This method is not very difficult, but it has a few gotchas (for instance, some programs like ssh, sudo or docker may expect to be attached to a tty). It is very well covered in most introductions to system programming (look for inter-process communication and subprocesses) for any Unix variant. Let me outline that procedure:
use the pipe system call to create a pipe (stdin_r, stdin_w)
use the pipe system call to create a pipe (stdout_r, stdout_w)
use the pipe system call to create a pipe (stderr_r, stderr_w)
use the fork system call to duplicate your program
In the child, close stdin_w, stdout_r and stderr_r, attach stdin_r, stdout_w and stderr_w to the standard streams, and use the exec system call to run the shell.
In the parent, close stdin_r, stdout_w and stderr_w; you can now write commands to stdin_w and read command output from stdout_r and stderr_r.
(This is intentionally very sketchy; I included the outline only so that you can be sure you have found the right place in your favourite textbook.)
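To make the outline a bit more concrete, here is a minimal sketch in C++ (stdin and stdout only - stderr would be a third pipe - with error handling mostly omitted and placeholder commands):

// Sketch of the pipe/fork/exec outline above: one /bin/sh child, fed several commands.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int in_pipe[2];    // parent writes commands, shell reads them
    int out_pipe[2];   // shell writes output, parent reads it
    if (pipe(in_pipe) < 0 || pipe(out_pipe) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                        // child: become the shell
        dup2(in_pipe[0], STDIN_FILENO);    // read commands from the pipe
        dup2(out_pipe[1], STDOUT_FILENO);  // send output to the other pipe
        close(in_pipe[0]);  close(in_pipe[1]);
        close(out_pipe[0]); close(out_pipe[1]);
        execl("/bin/sh", "sh", (char*)nullptr);
        perror("execl");
        _exit(127);
    }

    close(in_pipe[0]);    // parent: close the ends it does not use
    close(out_pipe[1]);

    const char* cmds = "echo setup done\necho useful app 1\nexit\n";  // placeholders
    write(in_pipe[1], cmds, strlen(cmds));   // several commands, one shell
    close(in_pipe[1]);                       // EOF also lets the shell terminate

    char buf[256];
    ssize_t n;
    while ((n = read(out_pipe[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);           // echo the shell's output

    close(out_pipe[0]);
    waitpid(pid, nullptr, 0);
    return 0;
}

In a long-running program you would keep the write end open and keep sending commands; closing it is what lets the shell exit here.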
There are third-party libraries implementing all that low-level stuff for you. You can use boost::process (which is not yet an official part of Boost), whose usage is illustrated with a full tutorial. There are plenty of alternatives, such as pstreams.
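For illustration, here is a rough sketch of driving one persistent shell with boost::process; it assumes the Boost.Process 1.x API and uses placeholder commands:

// Sketch, assuming Boost.Process 1.x: one bash child, several commands, shared environment.
#include <boost/process.hpp>
#include <iostream>
#include <string>

namespace bp = boost::process;

int main() {
    bp::opstream shell_in;    // we write commands here
    bp::ipstream shell_out;   // we read the shell's stdout here
    bp::child shell("/bin/bash",
                    bp::std_in < shell_in,
                    bp::std_out > shell_out);

    shell_in << "MY_VAR=hello" << std::endl;            // stands in for SETUP_ENVIRONMENT
    shell_in << "echo \"$MY_VAR world\"" << std::endl;  // stands in for a useful app

    std::string line;
    std::getline(shell_out, line);    // "hello world"
    std::cout << line << std::endl;

    shell_in << "exit" << std::endl;
    shell.wait();
    return shell.exit_code();
}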
The third option would be to avoid the shell entirely and execute the commands directly yourself. This is the approach followed by Rashell, an OCaml library defining primitives for reliably composing sub-processes, which you can use for your own inspiration.
I want my program to download some audio file from a link I give it and save it.
I know this can easily be done on the command line using curl (for instance: curl -A "Mozilla" "www.example.com" > hello.mp3).
I saw examples where system() was used to run curl (i.e. it looked something like system("curl -A \"Mozilla\" \"www.example.com\" > hello.mp3")). Even though this is an easy solution, it seems bad to me.
Would it be better practice to write an equivalent code using the matching library (libcurl in this case)?
What do you guys think?
P.S. - This is a general question in a sense. What I mean is that there are many command-line programs that can be run via system() to get a fast and easy result. The question is whether it's okay to use this method to achieve it.
Yes, it would be better to use libcurl directly. That's what it exists for.
That way, you avoid:
the cost of a system call
the cost of spawning a new process
potential security-related bugs in your system call
Invoking curl from the shell will basically just spawn a new shell and new process for no reason, then go ahead and use libcurl inside that process anyway. Cut out the middle man.
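For comparison, here is a minimal libcurl sketch of the same download as the command line above (assuming a plain HTTP GET is enough):

// Sketch of the libcurl equivalent of: curl -A "Mozilla" "www.example.com" > hello.mp3
#include <curl/curl.h>
#include <cstdio>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::FILE* out = std::fopen("hello.mp3", "wb");
    if (!out) return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com");
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "Mozilla");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    // With no CURLOPT_WRITEFUNCTION set, libcurl fwrite()s the body to CURLOPT_WRITEDATA.
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        std::fprintf(stderr, "download failed: %s\n", curl_easy_strerror(res));

    std::fclose(out);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}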
I am trying to find out a way to launch a custom daemon from my program. The daemon itself is implemented using double-forking mechanism and works fine if launched directly.
So far I have come across various ways to start a daemon:
Create an init script and install it to init.d directory.
Launch the program using the start-stop-daemon command.
Create .desktop file and place in one of the autostart paths.
While the first two methods start the service from the command line, the third is for autostarting the service (or any other application) at user login.
So far my guess is that the program can be executed directly using the exec() family of functions, or the start-stop-daemon command can be executed via the system() function.
Is there a better way to start/stop service?
Generally, startup is done from a shell script that calls your C++ program, which then does its double fork. Note that it should also close unneeded file descriptors, call setsid() and possibly setpgid()/setpgrp() (I can't remember whether these apply to Linux too), possibly chdir("/"), etc. There are a number of fairly standard things to do, described in the Stevens book - for more information see http://software.clapper.org/daemonize/daemonize.html
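For reference, a sketch of that classic double-fork sequence (Stevens-style; error handling abbreviated):

// Sketch of double-fork daemonization as described above.
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdlib>

static void daemonize() {
    if (fork() > 0) std::exit(0);   // first fork: the parent returns to the shell
    setsid();                       // become session leader, detach from the tty
    if (fork() > 0) std::exit(0);   // second fork: can never re-acquire a controlling tty

    umask(0);
    chdir("/");                     // don't keep any directory busy

    // Redirect the standard streams to /dev/null instead of the old terminal.
    int devnull = open("/dev/null", O_RDWR);
    dup2(devnull, STDIN_FILENO);
    dup2(devnull, STDOUT_FILENO);
    dup2(devnull, STDERR_FILENO);
    if (devnull > STDERR_FILENO) close(devnull);
}

int main() {
    daemonize();
    // ... the daemon's real work goes here ...
    for (;;) pause();
}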
If the daemon is supposed to run with root or other system user account, then the system /etc/init/ or /etc/init.d/ mechanisms are appropriate places to have scripts to stop|start|status|etc your daemon.
If the deamon is supposed to be for the user, and run under his/her account, you have a couple of options.
1) the .desktop file - I'm not personally a fan, but if it also does something for you on logging out (like let you trigger shutting down your daemon), it might be viable.
2) For console logins, there are ~/.bash_login and ~/.bash_logout - you can have these run commands supported by your daemon's wrapper to start it and (later) shut it down. The shutdown part can be done by saving the PID in a file, or by having .bash_login keep it in a variable that .bash_logout uses later. This may involve some tweaking to make sure the two scripts are run, once each, by the outermost login shell only (normal .bashrc stuff stays in .bashrc, and .bash_login would need to read it in for the login shell before starting the daemon, so the PATH and so on would be set up by then).
3) For graphical environments, you'd need to find the wrapper script from which things like your X window manager are run. I'm using lightdm, and at some point /etc/X11/Xsession.d/40x11-common_xsessionrc ends up running my ~/.xsessionrc, which gives me a hook to start up anything I want (I have it run my ~/.xinitrc, which runs my window manager and everything), as well as a place to shut everything down later. The lack of standardization in how control is handed to the user makes finding the hook pretty annoying, since just using a different login manager (e.g. lightdm versus gdm) can change where the hook is.
4) A completely different approach is to just have the user's crontab start the daemon. Run "man 5 crontab" and look for the special @reboot entry to have tasks run at boot. I haven't used it myself - there's a chance it's root-restricted, but it's easy to test - and you need only make sure your daemon exits gracefully (and quickly) at system shutdown when the system sends it a SIGTERM signal (see /etc/init.d/sendsigs for details).
Hope something from that helps.
I am trying to create a C++ daemon that runs on a Red Hat 6.3 platform, and am having trouble understanding the differences between the libc daemon() call, the daemon shell command, startproc, start-stop-daemon and about half a dozen other methods that Google suggests for creating daemons.
I have seen suggestions that two forks are needed, but calling daemon only does one. Why is the second fork needed?
If I write the init.d script to call bash daemon, does the C code still need to call daemon()?
I implemented my application to call the C daemon() function since it seems the simplest solution, but I am running into the problem that my environment variables seem to get discarded. How do I prevent this?
I also need to run the daemon as a particular user, not as root.
What is the simplest way to create a C++ daemon that keeps its environment variables, runs as a specific user, and is started on system boot?
Why is the second fork needed?
Answered in What is the reason for performing a double fork when creating a daemon?
bash daemon shell command
My bash 4.2 does not have a builtin command named daemon. Are you sure yours is from bash? What version, what distribution?
environment variables seem to get discarded.
I can see no indication to that effect in the documentation. Are you sure it is due to daemon? Have you checked whether they are present before, and missing after that call?
run the daemon as a particular user
Read about setresuid and related functions.
What is the simplest way to create a C++ daemon that keeps its environment variables, runs as a specific user, and is started on system boot?
Depends. If you want to keep your code simple, forget about all of this and let the init script handle it, e.g. via start-stop-daemon. If you want to handle this in your app, daemon combined with setresuid should be a good approach, although not the only one.
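For what it's worth, a rough sketch of that combination (the account name is a placeholder, and note that setresuid is Linux-specific):

// Sketch: daemonize with daemon(3), then drop privileges to a specific user.
#include <pwd.h>
#include <grp.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main() {
    // nochdir=0: chdir to "/"; noclose=0: redirect the standard streams to /dev/null.
    // daemon() does not touch the environment, so your variables survive this call.
    if (daemon(0, 0) < 0) { std::perror("daemon"); return 1; }

    struct passwd* pw = getpwnam("daemonuser");   // placeholder account name
    if (!pw) return 1;

    // Drop supplementary groups, then the group, then the user (while still root).
    if (initgroups(pw->pw_name, pw->pw_gid) < 0 ||
        setgid(pw->pw_gid) < 0 ||
        setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid) < 0)
        return 1;

    // ... daemon work runs here as the unprivileged user ...
    for (;;) pause();
}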
I'd like to use Bash to run a test suite automatically when I save any file in a given directory.
Is there a mechanism for bash to execute a given script on save events?
Thanks.
::EDIT::
I should have mentioned that I'm on using OSX.
Edited: you (the OP) mentioned you are using OSX. I'm not aware of any similar tools on OSX. There is a low-level system call (inherited from BSD) called "kqueue", but you'd have to implement your own user-level tool on top of it. There is a sample application from Apple, called "Watcher", but it's a proof of concept only and doesn't do what you want.
There is another thread about this on Stack Overflow (also inconclusive).
For lack of an appropriate tool, if you're using a specific programming language, I'd advise you to look for solutions already written for it. Otherwise, I think you're stuck with polling and managing the changes yourself...
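For the record, here is a bare-bones sketch of what such a user-level tool could look like on top of kqueue (macOS/BSD only; the watched file and the test command are placeholders):

// Sketch: block on kqueue until a file is written to, then run a command.
#include <sys/event.h>
#include <fcntl.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char* argv[]) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_EVTONLY);   // macOS: open for event notification only
    if (fd < 0) { std::perror("open"); return 1; }
    int kq = kqueue();
    if (kq < 0) { std::perror("kqueue"); return 1; }

    struct kevent change;
    EV_SET(&change, fd, EVFILT_VNODE, EV_ADD | EV_ENABLE | EV_CLEAR,
           NOTE_WRITE | NOTE_EXTEND, 0, nullptr);

    for (;;) {
        struct kevent event;
        if (kevent(kq, &change, 1, &event, 1, nullptr) > 0) {   // blocks until a change
            std::printf("file modified, running test suite\n");
            std::system("./run_test_suite.sh");                 // placeholder command
        }
    }
}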
Here's my original, Linux-based answer, for archival purposes:
If you're using Linux, you might want to take a look at inotify. More specifically, you can install inotify-tools, which include inotifywait.
With it, you can monitor files and directories for a number of events, such as access, modification, opening, closing and many others. inotifywait can exit once the specified event has been detected, and so a simple loop would get you what you want:
while :; do
    inotifywait -e modify /some/directory
    run_test_suite
done
By the way, many programming languages and environments already have their own continuous test runners (for instance, with Python you could use tdaemon, among others).
You can use incron to detect when a file has been closed.
Use dnotify only if inotify is not available on your system (Linux kernel < 2.6.13).
dnotify is a standard Linux method for watching directories:
http://en.wikipedia.org/wiki/Dnotify
Code example. Watch dir for changes:
dnotify WATCH_DIR -M -e SCRIPT_TO_EXECUTE
Please note that SCRIPT_TO_EXECUTE will be executed every time any file in WATCH_DIR changes.
You can use inotify, as explained here: Inotify Example: Introduction to Inotify with a C Program Example.
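For completeness, a bare-bones inotify sketch along the same lines (Linux only; the directory and the command are placeholders):

// Sketch: run a command whenever something in a directory is modified, using inotify.
#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main() {
    int fd = inotify_init();
    if (fd < 0) { std::perror("inotify_init"); return 1; }

    int wd = inotify_add_watch(fd, "/some/directory", IN_MODIFY | IN_CLOSE_WRITE);
    if (wd < 0) { std::perror("inotify_add_watch"); return 1; }

    char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);   // blocks until an event arrives
        if (len <= 0) break;                       // we only use the read as a trigger
        std::printf("change detected, running test suite\n");
        std::system("./run_test_suite.sh");        // placeholder command
    }

    close(fd);
    return 0;
}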