I am creating a child-parent fork() so that the parent can communicate with a shell (/bin/sh) through a pipe.
The problem is:
In the parent I set up select() on the child's output, but it unblocks only when the process has finished! So when I run, say, ps, it's okay, but when I run /bin/sh it produces no output until the shell exits. But I want to read its output!
for (;;) {
    FD_ZERO(&sh);
    FD_SET(PARENT_READ, &sh);  /* select() modifies the set, so rebuild it each iteration */
    select(PARENT_READ+1, &sh, NULL, NULL, NULL); // This unblocks only when the shell exits!
    if (FD_ISSET(PARENT_READ, &sh)) {
        while ((n = read(PARENT_READ, buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';     /* terminate after the bytes actually read */
            printf("C: %s\n", buf);
        }
    }
}
Is the answer somewhere in the area of disabling the buffering of pipes?
A lot of programs change their behavior depending on whether or not they think they're talking to a terminal (tty), and the shell definitely does this. Also, the default C stream stdout is typically line-buffered if it's a tty and fully buffered otherwise, which means it doesn't flush until the internal buffer is full, the program explicitly flushes it, or the program ends (stderr, by contrast, is unbuffered by default).
To work around this problem, you have to make your program pretend to be a terminal. To do this, you can use your system's pseudo-terminal APIs (try man 7 pty). The end result is a pair of file descriptors that sort-of work like a pipe.
Also, as an aside: when select unblocks, you should read exactly once from the triggered file descriptor. If you read more than once, which is possible with the loop you've got there, you risk blocking on a subsequent read, unless you've put the FD in non-blocking mode.
However, I have to ask: why do you need to interact with the shell in this way? Is it possible to, say, just run a shell script, or use "/bin/sh -c your_command_here" instead? There are relatively few programs that actually need a real terminal to work correctly - the main ones are programs that prompt for a password, like ssh, su or sudo.
Related
I opened a process (gnuplot) from C++ with the popen() function. When I press Ctrl+C to terminate my program, gnuplot also receives the SIGINT signal. I want to prevent this from happening, as it has an unfavorable effect on what I do (I would prefer to handle the signal with my own signal handler function). How do I do that?
I plot using the plot '-' command and iterate through all the values I want to plot. If gnuplot receives SIGINT in the middle, it might stop without completing the entire plot. I want it to complete the entire plot. This is the unfavorable effect I mean.
popen(3) runs the command string through a new shell, /bin/sh -c.
The trap builtin of the shell handles signals; with an empty first argument it ignores them. So you could do
FILE* f = popen("trap '' TERM INT; gnuplot", "w");
BTW, POSIX trap requires the signals to be named without SIG prefix.
But that won't work, since gnuplot itself explicitly handles signals. There is no way to avoid that from outside gnuplot. But take advantage of the free-software nature of gnuplot: download its source code, study it, and patch it to fit your bizarre needs. FWIW, SIGINT and signal appear in several places in the source code of gnuplot-5.0.5.
However, you should consider (instead of using popen) calling the low-level system calls explicitly (fork, execve, pipe, dup2, waitpid, signal, ...). Read Advanced Linux Programming for details.
I strongly suspect that your question is an XY problem. You don't explain what "unfavorable effect" you actually want to avoid, and I am guessing you might avoid it otherwise.
I plot using the plot '-' command and iterate through all the values I want to plot. If the gnuplot receives SIGINT in the middle, it might stop plotting in the middle without completing the entire plot. I want it to complete the entire plot.
Actually you might set up two or three pipes for gnuplot (one for its input, one for its output, perhaps one for its stderr). You need to go to the low-level system calls (explicit calls to pipe(2), fork(2), etc.). Your program should then have some event loop (probably based upon poll(2)). You would send a print "DONE" command to gnuplot after every plot '-' (don't forget to initialize with the appropriate set print '-', or use another pipe for gnuplot's stderr). Your event loop would then catch that DONE message to synchronize.
I had a similar problem. I'm using the tc command with the -batch parameter, and I need to keep it alive until it exits on its own after reaching its limit. My problem was that I was running two asynchronous popen processes, and after an exception was thrown, the second process was killed, with a lot of memory dumps etc. After finding and fixing this, I can now handle SIGINT, SIGTERM, and Ctrl+C without the tc process knowing anything about it. No need for traps or anything similar.
My running process handles stdin using getchar(). It works fine when I run it in the foreground. However, if I run it in the background and do echo "a" >> /proc/pid/fd/0, it won't work. On my system, /proc/pid/fd/0 is the same as /dev/pts/0, so how do I send something to the process's stdin so that getchar() can see it? I'm working in C++ over ssh.
When you run multiple programs in the background, they still have /dev/pts/XX as their controlling terminal (and stdin), but they are no longer eligible to read from it; only the shell or the foreground task can do that. If they try, they get a SIGTTIN signal, which stops a background process:
myaut@zenbook:~$ cat &
[1] 15250
myaut@zenbook:~$
[1]+  Stopped                 cat
The reasoning for this behavior is simple: multiple programs reading from one source leads to a race condition. I.e., when you type who am i into the shell, the shell would read who, background task #1 would read am, and task #2 would read i.
The solution is simple: do not use pseudo-terminals to transfer data between processes.
Use pipes, unnamed or named (created with mkfifo). They are as simple to read from as stdin. Modern shells also provide coprocesses, which let you avoid named pipes.
Use UNIX sockets in complex cases.
If you still need a pseudo-terminal, create a new one for your program with screen or another terminal emulator.
I am writing a program to run a different program over and over, giving it different input to a question each time and checking the output. system("the_program") accomplishes this, but how do I give that program input when it calls scanf()?
The simplest way is to write a file, and pass it to the child using redirection (system("the_program < the_file")).
But, and this is much better, you can create a pipe between your program and the child. The child needs its standard input (file descriptor 0) connected to the read end of the pipe. system is synchronous, so besides pipe and dup2 you would need the fork and execve system calls. Luckily, there is a wrapper for this: popen("the_program", "w"). It returns a FILE* that you can write to. Close the FILE* with pclose, and be sure to read its manual, because it is different from fclose!
In the case where you are writing both the parent and the child programs, there is no need to solve the problem by simulating interactive input when you can just pass arguments:
system("./the_program the_scanf_input");
And of course in the_program:
var = argv[1]; // instead of scanf("%s", var)
I have two processes written in C++, piped one after the other. One gives some information to the other's stdin, then they both go on to do something else.
The problem is that the second process hangs inside cin.getline(), even though there's no more data to be exchanged. The solution was for the first process to fclose(stdout), and that works, except when the process is wrapped up in a script. So apparently the stdout of the script is still open after the process closes its own, which seems fair, but in my case, can I close it? Thanks
Since your program doesn't terminate, you can exec your-program in the script instead of just your-program, and save an open file descriptor at the writing end of the pipe (and a bunch of other things).
Alternatively, start your program in the background and exit the script.
You can also close the standard output, but if you do that before you start your program, it won't be able to use the closed file descriptor. So you have to close it while the program is running, which is not exactly trivial. I can think of starting the program in the background, closing the standard output (use exec 1>&- for that), and bringing the program back to the foreground.
I'm looking at the code for a C++ program which pipes the contents of a file to more. I don't quite understand it, so I was wondering if someone could write pseudocode for a C++ program that pipes something to something else? Why is it necessary to use fork?
create pipe
fork process
if child:
    connect read end of pipe to stdin
    exec more
else (parent):
    write to write end of pipe
You need fork() so that you can replace stdin of the child before calling exec, and so that you don't wait for the process to finish before continuing.
Why is it necessary to use fork?
When you run a pipeline from the shell, eg.
$ ls | more
what happens? The shell runs two processes (one for ls, one for more). Additionally, the output (STDOUT) of ls is connected to the input (STDIN) of more, by a pipe.
Note that ls and more don't need to know anything about pipes, they just write to (and read from) their STDOUT (and STDIN) respectively. Further, because they're likely to do normal blocking reads and writes, it's essential that they can run concurrently. Otherwise ls could just fill the pipe buffer and block forever before more gets a chance to consume anything.
... pipes something to something else ...
Note also that, aside from the concurrency argument, if your something else is another program (like more), it must run in another process. You create that process using fork. If you just exec'd more in the current process, it would replace your program.
In general, you can use a pipe without fork, but you'll just be communicating within your own process. This means you're either doing non-blocking operations (perhaps in a synchronous co-routine setup), or using multiple threads.