I have a Python script that sends a SIGHUP signal to another process, unrelated to my script (not a child, and not something I can modify). When the process receives the SIGHUP, it performs a "light restart": it reloads its configuration file and updates its information.
The "restart" doesn't stop the process, so I can't wait for an exit code. I know I could watch the process's log file to see whether the signal was handled, but that is too heavy and too slow for the script's flow.
Is there another method I can use to be notified that the SIGHUP has been received by the process?
The inotify-tools C library may solve your problem. Installing it gives you access to two new commands: inotifywait and inotifywatch.
Let's say you have a file /tmp/foo.bar. You can start watching for any read access on this file with the following command:
inotifywait --event access /tmp/foo.bar
Then, do a cat /tmp/foo.bar and the program will return.
As I said, it's a C library, and I guess there are implementations of it for other languages. So feel free to skip this Bash example and write your own program using the library.
I am trying to run multiple commands on Ubuntu from C++ code at the same time.
I used the system() call to run them, but the problem with system() is that it invokes only one command at a time while the rest wait.
Below is my sample code, which may help show what I am trying to do.
The main thing is that I want to run all these commands at once, not one by one. Please help me.
Thanks in advance.
#include <cstdlib>
#include <string>
using namespace std;

int main()
{
    string command[3];
    command[0] = "ls -l";
    command[1] = "ls";
    command[2] = "cat main.cpp";
    for (int i = 0; i < 3; i++) {
        system(command[i].c_str());
    }
    return 0;
}
You should read Advanced Linux Programming (a bit old, but freely available). You probably want to do it the traditional way, like most shells do:
perhaps catch SIGCHLD (set the signal handler before fork, see signal(7) & signal-safety(7)...)
call fork(2) to create a new process. Be sure to check all three cases (failure with a negative returned pid_t, child with a 0 pid_t, parent with a positive pid_t). If you want to communicate with that process, use pipe(2) (read about pipe(7)...) before the fork.
in the child process, close some useless file descriptors, then run some exec function (or the underlying execve(2)) to run the needed program (e.g. /bin/ls)
in the parent (perhaps after having received a SIGCHLD), call wait(2) or waitpid(2) or a related function.
This is very usual; several chapters of Advanced Linux Programming explain it in more detail.
There is no need to use threads in your case.
However, notice that the roles of ls and cat could be accomplished with various system calls (listed in syscalls(2)...), notably read(2) & stat(2). You might not even need to run other processes. See also opendir(3) & readdir(3).
Perhaps (notably if you communicate with several processes through several pipe(7)s) you might want to have some event loop using poll(2) (or the older select(2)). Some libraries provide an event loop (notably all GUI widget libraries).
You have a few options (as always):
Use threads (the C++ standard library implementation is good) to spawn multiple threads, each of which performs a system call and then terminates. join on each thread in the list to wait for them all to terminate.
Use the *NIX fork call to spawn a new process, then within each child process use exec to execute the desired command (see here for an example of "getting the right string to the right child"). The parent process can use waitpid to determine when all children have finished running, in order to move on with the program.
Append "&" to each of your commands, which tells the shell to run each one in the background (specifically, system will start the process in the background and then return, without waiting for the result). I haven't tried this and don't know whether it will work. Note that you then can't wait for the call to terminate (thanks PSkocik).
Just pointing out: if you run those 3 specific commands at the same time, you're unlikely to be able to read the output, as they'll all print text to the terminal at once.
If you do need to read the output from within the program (though this isn't mentioned in your question), this is relevant (although it doesn't use system).
When one uses cmd.exe interactively to run all sorts of Windows CLI applications, one can easily stop them by pressing CTRL+C or CTRL+BREAK. This is implemented by signaling the process, as can be read here. cmd.exe itself does not terminate under these conditions, as explained in a comment on this question.
Now, consider the following scenario. My application opens a cmd.exe using CreateProcess(), and the user has started another application, b.exe, through it. Say that my application wants to exit before b.exe has ended, and it doesn't really care about terminating it gracefully. Optimally, I'd like to mimic the user pressing CTRL+C and then send exit to the cmd.exe (let's say I can do that IO-wise). The Windows API offers GenerateConsoleCtrlEvent() for that (almost) exact purpose, but it can be ignored by the process (cmd.exe in this case), and in particular it won't forward the signal to b.exe.
How does GDB achieve the feat of attaching itself to a running process?
I need a similar capability, where I can issue CLI commands to a running process. For example, I could query the process's internal state, such as "show total_messages_processed". How can I build support for issuing commands to a running process under Linux?
Is there a library that can provide CLI communication abilities to a running process and can be extended with custom commands?
The process itself is written in C++.
GDB doesn't use a CLI to communicate with its debuggee; it uses the ptrace system call / API.
CLI means "command-line interface". The simplest form of communication between processes is stdin / stdout. This is achieved through pipes. For example:
ps -ef | grep 'httpd'
The standard output of ps (which will be a process listing) is connected to the standard input of grep, which will process that listing line by line.
Are you writing both programs, or do you want to communicate with an already-existing process? I have no idea what "show total_messages_processed" means without context.
If you simply want the program to report some status, a good approach is the one dd takes: sending the process the SIGUSR1 signal causes it to dump its current stats to stderr and continue processing:
$ dd if=/dev/zero of=/dev/null&
[1] 19716
$ pid=$!
$ kill -usr1 $pid
$ 10838746+0 records in
10838746+0 records out
5549437952 bytes (5.5 GB) copied, 9.8995 s, 561 MB/s
Did you consider using AF_UNIX sockets in your process? Or D-Bus? Or making it an HTTP server (e.g. using libonion or libmicrohttpd), perhaps for SOAP, or RPC/XDR?
Read some books on Advanced Linux Programming or Advanced Unix Programming; you surely want to use (perhaps indirectly) some multiplexing syscall like poll(2), perhaps on top of some event library like libev. Maybe you want to dedicate a thread to that.
We cannot tell you more without knowing what kind of process you are thinking of; you may have to redesign some part of it. A traditional compute-intensive process is not the same as an SMTP server process. In particular, if you already have some event loop in the process, use and extend it for monitoring purposes. If you don't have any event loop (e.g. in a traditional number-crunching "batch" application), you may need to add one.
In this case I'd suggest fork, which splits the currently running process into two. The parent process would read stdin, process the commands, and be able to inspect all memory shared between the two processes. One could theoretically even skip the usual forms of interprocess communication (locks, mutexes, semaphores, signals, sockets or pipes), but be prepared for the fact that the child process has not necessarily written its state to memory and may keep it in registers.
At fork the operating system makes a copy of the process's local variables, after which each process has its own internal state; thus the easiest method for passing data would be to allocate shared memory.
One can also add a signal handler to the child process that goes into a sleep/wait state and exits only on another signal; that way one has more time to inspect the child process's internal state. The main rationale for this kind of approach is that one doesn't have to make the process under debugging aware of being debugged: the parent and child processes share the same code base, and it's enough for the parent process to implement the necessary output methods (formatting to screen?), data serialization, etc.
I have a binary built with -fprofile-arcs and -ftest-coverage. The binary is run by a process monitor, which spawns it as a child process. When I want the process to exit, I have to go through the process monitor, which sends a SIGKILL to the process. I found out that the .gcda files are not generated in this case. What can I do?
EDIT: Actually, the process monitor first tries to make the process exit. However, the ProcessMonitor library (used in each process) calls _exit instead of exit when the user issues a command to stop the process. This is the cause of all the trouble.
This might work:
http://nixcraft.com/coding-general/12544-gcov-g.html
In summary: call __gcov_flush() in the program, possibly in a signal handler or periodically during execution.
If it is C++ code, remember to declare the function extern "C".
Also remember to use some kind of preprocessor #ifdef so that the program does not call it when not built with profiling.
SIGKILL is a "hard" kill signal that cannot be caught by the application. Therefore, the app has no chance to write out the .gcda files.
I see two options:
Catch signals other than SIGKILL: any sensible process monitor should send a SIGTERM first; init and the batch managers I've encountered do this. SIGKILL is a last resort and should be sent only after a SIGTERM followed by a grace period.
Workaround: run the program via an intermediate program that receives the SIGKILL; have the actual program check periodically (or in a separate thread) whether its parent is still alive, and if not, have it exit gracefully.
As far as I know, compilers (IntelC too) only store profiling stats in an exit handler.
So what about somehow telling the process to quit, instead of killing it?
Like adding a SIGTERM handler, maybe, with exit() in it? (A SIGKILL handler won't help, since SIGKILL cannot be caught.)
I need to execute some commands via /bin/sh from a daemon. Sometimes these commands take too long to execute, and I need some way to interrupt them. The daemon is written in C++, and the commands are executed with std::system(). I need the stack cleaned up so that destructors are called when the thread dies (catching the event in a C++ exception handler would be perfect).
The threads are created using boost::thread. Unfortunately, neither boost::thread::interrupt() nor pthread_cancel() is useful in this case.
I can imagine several ways to do this, from writing my own version of system() to finding the child's process ID and signaling it. But there must be a simpler way?
Any command executed using system() runs in a new process. Unfortunately, system() halts the execution of the calling process until the new process completes, so if the subprocess hangs, the caller hangs as well.
The way to get around this is to use fork to create a new process and call one of the exec functions to execute the desired command. Your main process can then wait on the child's process ID (pid). A timeout can be achieved by generating a SIGALRM with the alarm call before the wait call.
If the subprocess times out, you can kill it with the kill call. Try SIGTERM first; if that fails, try again with SIGKILL, which will certainly kill the child process.
Some more information on fork and exec can be found here
I did not try boost::process, as it is not part of Boost. I did, however, try ACE_Process, which showed some strange behavior (the time-outs sometimes worked and sometimes did not). So I wrote a simple std::system() replacement that polls for the status of the running process (effectively avoiding the problems with process-wide signals and alarms in a multi-threaded process). I also use boost::this_thread::sleep(), so that boost::thread::interrupt() should work as an alternative or in addition to the time-out.
Stackoverflow.com does not work very well with my Firefox under Debian (in fact, I could not reply at all; I had to start Windows in a VM) or with Opera (in my VM), so I'm unable to post the code in a readable manner. My prototype (before I moved it into the actual application) is available here: http://www.jgaa.com/files/ExternProcess.cpp
You can try looking at Boost.Process:
Where is Boost.Process?
I have been waiting a long time for such a class.
If you are willing to use Qt, a nice portable solution is QProcess:
http://doc.trolltech.com/4.1/qprocess.html
Of course, you can also write your own system-specific solution, as Let_Me_Be suggests.
Either way, you'll probably have to get rid of the system() call and replace it with a more powerful alternative.