My C++ program is just a very simple while loop in which I grab a user command from the console (standard input, stdin) using the blocking getline() function. Every now and then I must call an external bash script for other purposes. The script is not directly related to what the user does; it just does some work on the filesystem, but it has to print text lines to the console's standard output (stdout) to inform the user about the outcome of its computations.
What I get is that as soon as the script starts and prints to stdout, the getline() function behaves as if it were non-blocking (it is supposed to block until the user inputs some text). As a consequence, the while(1) loop starts spinning at full speed and CPU usage skyrockets to nearly 100%.
I narrowed the problem down to a single C++ source file which reproduces it in exactly the same way; here it is:
#include <iostream>
#include <string>
#include <sstream>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h> // fork()

int main(void)
{
    int pid = fork(); // spawn
    if(pid > 0)
    {
        // child thread
        system("sleep 5; echo \"you're screwed up!!!\"");
    }
    else
    {
        // main thread
        std::string input;
        while(1)
        {
            std::cout << std::endl << "command:";
            getline(std::cin, input);
        }
    }
}
In this particular case, after 5 seconds the program starts spamming "\ncommand:" on stdout, and the only way to stop it is to send a SIGKILL signal. Sometimes you have to press some keys on the keyboard before the program starts spamming text lines.
Let this code run for 10 seconds, then press any key on the keyboard. Be ready to fire the SIGKILL signal at the process from another console; you can use the command killall -9 progname.
Did you check whether failbit or eofbit is set?
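If getline() fails (for example because stdin reached EOF or went into an error state), failbit/eofbit stays set and every subsequent getline() returns immediately, which would make the loop spin exactly as described. A minimal diagnostic sketch of that check, assuming the same loop as above:

std::string input;
while(1)
{
    std::cout << std::endl << "command:";
    if(!getline(std::cin, input))
    {
        // getline() failed: eofbit/failbit is now set and stays set,
        // so without this check the loop would spin at full speed
        std::cerr << "stdin closed or in error state (eof=" << std::cin.eof()
                  << ", fail=" << std::cin.fail() << ")\n";
        break; // or std::cin.clear() to keep trying
    }
}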
Try changing the following line of your code
if (pid > 0)
to
if (pid == 0)
fork() returns 0 to the child and the pid of the child to the parent. In your example, you are running system() in the parent and exiting the parent. The child then becomes an orphan process running in a while(1) loop, which I guess is messing with stdin/stdout.
I have modified your program to run system() in the child process.
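Roughly, that modification looks like this (a sketch: the two fork() branches are swapped so that system() runs in the child, and the getline() return value is checked so the loop cannot spin if stdin closes):

#include <iostream>
#include <string>
#include <stdlib.h>
#include <unistd.h> // fork()

int main(void)
{
    int pid = fork();
    if(pid == 0)
    {
        // child: run the external script here
        system("sleep 5; echo \"you're screwed up!!!\"");
    }
    else
    {
        // parent: keep reading user commands
        std::string input;
        while(1)
        {
            std::cout << std::endl << "command:";
            if(!getline(std::cin, input)) break; // stop if stdin closes instead of spinning
        }
    }
}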
The basic problem:
if(pid > 0)
{
// child thread
system("sleep 5; echo \"you're screwed up!!!\"");
}
This is the PARENT. ;) The child gets pid == 0.
fork() creates a new process by duplicating the calling process into a separate memory space. Whether the code is running in the parent or in the forked child can be determined from the pid_t value returned by fork().
I used fork() to create some concurrent processes from a single parent. These processes are commands in a shell that no need to be executed.
I'm wondering how I can check whether the command is a valid command that can be executed, without using the system functions and/or actually executing it.
#include <iostream>
#include <unistd.h>
#include <string>

int main(){
    std::string command = "/bin/ls";
    //std::string invalidCommand = "/bin/123";
    pid_t pid = fork();
    if(pid == -1 || `here I want to check if the command is executable without execution`){
        std::cout << "Error in forking or Command is not executable" << std::endl;
    }
    else if (pid == 0){
        std::cout << "Process has been forked and valid to execute" << std::endl;
    }
    return 0;
}
These processes are commands in a shell that no need to be executed.
I don't fully understand what you want to say with this sentence. However, I think you are not aware of how fork() works and how system() is related to fork():
As you have already found out, fork() duplicates a running process; this means that the program is now running twice and all variables exist twice (just as if you ran your program twice).
system() internally uses fork() to create a copy of the process; in the newly created process it uses one of the exec() variants (such as execvp()) to replace the program in the new process with another program.
Then it uses one of the wait() variants (such as waitpid()) to wait for the new process to finish:
// needs <unistd.h> and <sys/wait.h>
fflush(stdout);
fflush(stderr);
int newpid = fork();
if(newpid == 0)
{
    // child: replace this process with "ls"
    execlp("ls", "ls", "./subdirectory", (char *)NULL);
    std::cerr << "Could not start \"ls\".\n";
    fflush(stderr);
    _exit(1);
}
if(newpid < 0)
{
    std::cerr << "Not enough memory.\n";
}
else
{
    int code;
    waitpid(newpid, &code, 0);
    if(code == 0) std::cout << "\"ls\" was successful.";
    else std::cout << "\"ls\" was not successful.";
}
If you want to have "special" behaviour (such as redirecting stdout to a file), you typically don't use the system() function; instead you implement the program the way it is shown above.
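For example, redirecting the child's stdout to a file before the exec could look roughly like this (a sketch; the file name is made up):

// needs <fcntl.h>, <unistd.h> and <sys/wait.h>
fflush(stdout);
fflush(stderr);
int newpid = fork();
if(newpid == 0)
{
    // child: point stdout at a file, then start "ls"
    int fd = open("listing.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if(fd < 0) _exit(1);
    dup2(fd, 1); // file descriptor 1 (stdout) now refers to the file
    close(fd);
    execlp("ls", "ls", "./subdirectory", (char *)NULL);
    _exit(1); // only reached if exec failed
}
if(newpid > 0)
{
    int code;
    waitpid(newpid, &code, 0);
}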
I'm wondering how I can check whether the command is a valid command ...
Without running the program it is nearly impossible to find out if some command is executable:
It is possible to find out whether a program with some name (e.g. "/usr/bin/ls") exists and is marked as executable using the access() function (this is what command -v or test -x do).
However, this test will not detect the case where a file mistakenly has the x flag set although it is a document and not a program. (This is often the case for files on Windows-formatted media.)
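For example, a minimal sketch of such a check with access() (it only inspects the file, it does not run it):

#include <unistd.h>
#include <iostream>

int main()
{
    const char *command = "/bin/ls";
    if(access(command, X_OK) == 0)
        std::cout << command << " exists and is marked executable\n";
    else
        std::cout << command << " is missing or not executable\n";
}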
Even if wait() returns the value passed to _exit(), it is difficult to tell whether exec() failed (meaning the program could not be executed) or whether the program that was started simply returned the same code that we use in our _exit() call.
You can send some information from the new process back to the original process when the exec() function has returned (exec() never returns on success). However, sending this information is not that easy: just setting a variable will not work:
int ls_failed = 0;
int pid = fork();
if(pid == 0)
{
    execlp("ls", "ls", "./subdirectory", (char *)NULL);
    ls_failed = 1;   // this assignment happens in the child's copy of the variable
    _exit(1);
}
waitpid(pid, NULL, 0);
if(ls_failed > 0) std::cout << "Starting \"ls\" failed.";
The two processes behave as if you had started the program twice; therefore each process has its own variables, so the variable ls_failed in the newly started process is not the same as the variable ls_failed in the original process.
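What does work is an explicit channel between the two processes. One common pattern (not part of the original example, just a sketch) is a pipe whose write end is marked close-on-exec: if exec() succeeds the descriptor is closed automatically and the parent reads 0 bytes; if exec() fails the child writes the error code before exiting:

#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>
#include <cerrno>
#include <iostream>

int main()
{
    int fd[2];
    if(pipe(fd) != 0) return 1;
    fcntl(fd[1], F_SETFD, FD_CLOEXEC); // write end disappears automatically on a successful exec

    int pid = fork();
    if(pid == 0)
    {
        close(fd[0]);
        execlp("ls", "ls", "./subdirectory", (char *)NULL);
        int err = errno;                 // exec failed; report why
        write(fd[1], &err, sizeof(err));
        _exit(127);
    }
    close(fd[1]);
    int err = 0;
    ssize_t n = read(fd[0], &err, sizeof(err)); // 0 bytes read means exec succeeded
    close(fd[0]);
    waitpid(pid, NULL, 0);
    if(n > 0) std::cout << "Starting \"ls\" failed: errno " << err << "\n";
    else      std::cout << "\"ls\" was started.\n";
}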
std::cout << ...
Please note that std::cout probably performs an fwrite(..., stdout) internally. This function does not write directly to the terminal but into a buffer; when the buffer is full, all the buffered data is written at once.
When fork() is called, the buffer is duplicated; when _exit() or exec() is used, the data in the buffer is lost.
This may lead to weird effects:
std::cout << "We are doing some fork() now.";
int pid = fork();
if(pid == 0)
{
std::cout << "\nNow the child process is running().";
_exit(0);
}
waitpid(pid, NULL, 0);
std::cout << "\nThe child process has finished.\n";
Depending on the buffer size we could get the following output:
We are doing some fork() now.
Now the chi ng some fork() now.
The child process has finished.
Therefore, we should perform fflush(stdout) and fflush(stderr) before using fork(), an exec() variant or _exit(), unless we know that the corresponding buffer (stdout for std::cout and stderr for std::cerr) is empty.
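Applied to the example above, that looks like this (a sketch):

std::cout << "We are doing some fork() now.";
fflush(stdout); // write the buffered text now, so the child does not inherit and re-print it
fflush(stderr);
int pid = fork();
if(pid == 0)
{
    std::cout << "\nNow the child process is running().";
    std::cout.flush(); // flush explicitly, because _exit() skips the normal stream cleanup
    _exit(0);
}
waitpid(pid, NULL, 0);
std::cout << "\nThe child process has finished.\n";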
You can use the wait() system call and check the return value of the command that you attempted to execute; combined with fork(), that gives you what you need. Try reading the man page of wait:
https://www.man7.org/linux/man-pages/man2/wait.2.html
I'm trying to create a function that returns true if execvp is successful and false if it is not. Initially, I didn't use a pipe, and the problem was that whenever execvp failed I got two returns, one false and one true (from the parent). Now that I'm piping, I never get a false returned when execvp fails.
I know there are a lot of related questions and answers on this topic, but I can't seem to narrow down where my particular error is. What I want is for my variables return_type_child, return_type_parent, and this->return_type to all contain the same value. I expected that in the child process execvp would fail, so the following lines would execute. As a result, I thought that the three variables mentioned would all be false, but instead when I print the value of this->return_type, 1 is displayed.
bool Command::execute() {
    this->fork_helper();
    return return_type;
}

void Command::fork_helper() {
    bool return_type_child = true;
    int fd[2];
    pipe(fd);
    pid_t child;
    char *const argv[] = {"zf", "-la", nullptr};
    child = fork();
    if (child > 0) {
        wait(NULL);
        close(0);
        close(fd[1]);
        dup(fd[0]);
        bool return_type_parent = read(fd[0], &return_type_child, sizeof(return_type_child));
        this->return_type = return_type_parent;
    }
    else if (child == 0) {
        close(fd[0]);
        close(1);
        dup(fd[1]);
        execvp(argv[0], argv);
        this->return_type = false;
        return_type_child = false;
        write(1, &return_type_child, sizeof(return_type_child));
    }
    return;
}
I've also tried putting a cout statement after execvp(argv[0], argv), which never ran. Any help is greatly appreciated!
From the code, it seems to be an XY problem (edit: moved this section to the front due to a comment that confirms this). If the goal is to get the exit status of the child, then that is exactly what wait returns, and no pipes are required:
int stat;
wait(&stat);
Read the manual of wait to see how to interpret it. The value of stat can be tested as follows:
WEXITSTATUS(stat) - If WIFEXITED(stat) != 0, then these are the lower 8 bits of the value the child passed to exit(N) or returned from main. It might work without checking WIFEXITED, but the standard does not guarantee that.
WTERMSIG(stat) - If WIFSIGNALED(stat) != 0, then this is the number of the signal that caused the process to exit (e.g. 11 is a segmentation fault). It might work without checking WIFSIGNALED, but the standard does not guarantee that.
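Put together, inspecting the status could look roughly like this (a sketch using the macros above):

#include <sys/wait.h>
#include <iostream>

// Reports how the child that wait() reaped actually ended.
void report_child_status()
{
    int stat;
    if(wait(&stat) == -1)
        std::cout << "No child to wait for\n";
    else if(WIFEXITED(stat))
        std::cout << "Child exited with status " << WEXITSTATUS(stat) << "\n";
    else if(WIFSIGNALED(stat))
        std::cout << "Child was killed by signal " << WTERMSIG(stat) << "\n";
}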
There are several errors in the code. See the added comments:
void Command::fork_helper() {
// File descriptors here: 0=stdin, 1=stdout, 2=stderr
//(and 3..N opened in the program, could also be none).
bool return_type_child = true;
int fd[2];
pipe(fd);
// File descriptors here: 0=stdin, 1=stdout, 2=stderr
//(and 3..N opened in the program, could also be none).
// N+1=fd[0] data exhaust of the pipe
// N+2=fd[1] data intake of the pipe
pid_t child;
char *const argv[] = {"zf","-la", nullptr};
child = fork();
if (child > 0) {
// This code is executed in the parent.
wait(NULL); // wait for the child to complete.
This wait is a potential deadlock: if the child writes enough data to the pipe (usually in the kilobytes), the write blocks and waits for the parent to read the pipe. The parent's wait(NULL) waits for the child to complete, while the child waits for the parent to read the pipe. This is likely not affecting the code in question, but it is problematic.
close(0);
close(fd[1]);
dup(fd[0]);
// File descriptors here: 0=new stdin=data exhaust of the pipe
// 1=stdout, 2=stderr
// (and 3..N opened in the program, could also be none).
// N+1=fd[0] data exhaust of the pipe (stdin is now a duplicate)
This is problematic since:
the code has just lost the original stdin;
the pipe is never closed. You should close fd[0] explicitly, don't close(0), and don't duplicate fd[0].
It is a good idea to avoid having duplicate descriptors, except for having stderr duplicate stdout.
bool return_type_parent = read(fd[0], &return_type_child, sizeof(return_type_child));
this->return_type = return_type_parent;
}
else if (child == 0) {
// this code runs in the child.
close(fd[0]);
close(1);
dup(fd[1]);
// File descriptors here: 0=stdin, 1=new stdout=pipe intake, 2=stderr
//(and 3..N opened in the program, could also be none).
// N+2=fd[1] pipe intake (new stdout is a duplicate)
This is problematic, since there are now two duplicate data intakes to the pipe. In this case it is not critical, since both are closed automatically when the process ends, but it is bad practice: only closing all the pipe intakes signals END-OF-FILE to the exhaust, so closing one intake but not the other does not signal END-OF-FILE. Again, in your case it is not causing trouble because the child's exit closes all the intakes.
execvp(argv[0], argv);
The code below the above line is never reached unless execvp itself failed. execvp fails only when the file does not exist or the caller has no permission to execute it. If the executable starts to execute and fails later (possibly even because it fails to load a shared library), execvp itself has still succeeded and never returns. This is because execvp replaces the program in the process, so the following code is no longer in memory once the other program starts to run.
this->return_type = false;
return_type_child = false;
write(1,&return_type_child,sizeof(return_type_child));
}
return;
}
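For completeness, a sketch of Command::fork_helper() rewritten along those lines, reporting success through the child's exit status instead of a pipe (this assumes return_type is a bool member, as in the question):

// needs <unistd.h> and <sys/wait.h>
void Command::fork_helper() {
    char *const argv[] = {(char *)"zf", (char *)"-la", nullptr};
    pid_t child = fork();
    if (child == 0) {
        execvp(argv[0], argv);
        _exit(127);                        // only reached if execvp itself failed
    }
    if (child < 0) {                       // fork failed
        this->return_type = false;
        return;
    }
    int stat = 0;
    waitpid(child, &stat, 0);
    // true only if the child terminated normally with exit status 0
    this->return_type = WIFEXITED(stat) && WEXITSTATUS(stat) == 0;
}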
Just for fun I'm trying to write a library that does everything ncurses does, using iostreams and sending escape sequences directly to the terminal.
I'm trying to handle SIGWINCH to tell the library when the terminal is resized. The program responds normally until I resize the terminal, then it stops responding to input, even CTRL-C (although I'm not handling SIGINT, and have the terminal in "raw" mode using termios).
Here's some code snippets I've copied out of my code to show how I've set up the signal handler.
void handle_sigwinch(int sig)
{
if(sig == SIGWINCH)
{
// set a flag here
}
}
void setup_signals()
{
struct sigaction new_sig_action;
new_sig_action.sa_handler = handle_sigwinch;
sigemptyset (&new_sig_action.sa_mask);
new_sig_action.sa_flags = 0;
sigaction (SIGWINCH, NULL, &old_sig_action_);
if (old_sig_action_.sa_handler != SIG_IGN)
{
sigaction (SIGWINCH, &new_sig_action, NULL);
}
}
int main()
{
setup_signals();
int ch;
// exit if ctrl-c is pressed
while((ch = cin.get()) != 3)
{
if(ch > 0)
cout << (char)ch;
}
}
I've tailored my code according to the example provided at https://www.gnu.org/software/libc/manual/html_node/Sigaction-Function-Example.html#Sigaction-Function-Example for setting up the signal handler.
Is there something I've failed to do after handling SIGWINCH that is causing my program to stop working?
Edit: I left out the code where I set up the terminal using cfmakeraw and tcsetattr, and prior to this I sent an escape sequence for putting xterm into the alternate screenbuffer mode.
Thanks to nos's comment, I found through the debugger that the program was running normally, but cin.get() wasn't receiving valid input anymore. So I changed my google search from "program hangs after signal handler" to "input stream broken after signal handler" and found this answer on StackOverflow, which allowed me to realize that the input stream was in an error state after the signal handler was called.
I had placed a check on the input to ignore a character value of -1 (I must have been thinking of the Arduino read() call when I did that, where -1 indicates that no input is available). So the program was basically ignoring errors on the input stream. (I edited my question's code to reflect that omission.)
I placed a cin.clear() statement immediately before the read in the loop, and now the program works as expected.
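For reference, the working loop now looks roughly like this (a sketch of the fix, not the full library code):

int ch;
while(true)
{
    cin.clear();          // recover from the error state left by the interrupted read
    ch = cin.get();
    if(ch == 3)           // exit if ctrl-c is pressed (terminal is in raw mode)
        break;
    if(ch > 0)
        cout << (char)ch;
}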
A C++ program of mine calls fork(), and the child immediately executes another program. I have to interact with the child but terminate its parent at the same time, because the parent's executable will be replaced. I somehow need to get the orphan back into the foreground so that I can interact with it via bash; currently I only get its output. So I either need to send the parent to the background and the child to the foreground and then terminate the parent, or send the child to the foreground immediately when the parent terminates.
To my knowledge, I must set the child to be process group leader before its parent terminates.
With generous borrowing from this thread, I arrived at the following testing ground (note, this is not the full program - it just outlines the procedure):
int main(int argc, char *argv[])
{
    printf("%i\n", argc);
    printf("\nhello, I am %i\n", getpid());
    printf("parent is %i\n", getppid());
    printf("process leader is %i\n", getsid(getpid()));
    int pgrp;
    std::stringstream pidstream;
    pidstream << tcgetpgrp(STDIN_FILENO);
    pidstream >> pgrp;
    printf("foreground process group ID %i\n", pgrp);
    if(argc==1)
    {
        int child = fork();
        if(!child) {execl("./nameofthisprogram","nameofthisprogram", "foo", NULL);}
        else
        {
            signal(SIGTTOU, SIG_IGN);
            usleep(1000*1000*1);
            tcsetpgrp(0, child);
            tcsetpgrp(1, child);
            std::stringstream pidstream2;
            pidstream2 << tcgetpgrp(STDIN_FILENO);
            pidstream2 >> pgrp;
            printf("foreground process group ID %i\n", pgrp);
            usleep(1000*1000*3);
            return 0;
        }
    }
    // signal(SIGTTOU, SIG_IGN); unnecessary
    int input;
    int input2;
    printf("write something\n");
    std::cin >> input;
    printf("%i\n", input);
    usleep(1000*1000*3);
    printf("%i\n", input);
    printf("write something else\n");
    std::cin >> input2;
    usleep(1000*1000*3);
    printf("%i\n", input2);
    return 0;
}
With the above code, the parent dies after I get prompted for the first input. If I then delay my answer beyond the parent's death, it picks up the first input character and prints it again. For input2, the program does not wait for my input.
So it seems that after the first character, input is entirely terminated.
Am I approaching this fundamentally wrong, or is it simply a matter of reassigning a few more ids and altering some signals?
I see a few things wrong here.
You're never putting the child process in its own process group; therefore, it remains in the original one and is therefore in the foreground along with the parent.
You're calling tcsetpgrp() twice; it only needs to be called once. Assuming no redirection, stdin and stdout both refer to the terminal and therefore either call would do.
With the above code, the parent dies after I get prompted for the first input. If I then delay my answer beyond the parent's death, it picks up the first input character and prints it again. For input2, the program does not wait for my input. So it seems that after the first character, input is entirely terminated.
What you're observing here is a direct consequence of 1.: since both processes are in the foreground, they're both racing to read from stdin and the outcome is undefined.
I somehow need to get the orphan back into the foreground so that I may interact with it via the bash - I am currently only getting its output.
From what I understand, you would expect to be interacting with the exec'ed child after the fork()/exec(). For that to happen, the child needs to be in its own process group, and needs to be put in the foreground.
int child = fork();
signal(SIGTTOU, SIG_IGN);
if (!child) {
setpgid(0, 0); // Put in its own process group
tcsetpgrp(0, getpgrp()); // Avoid race condition where exec'd program would still be in the background and would try to read from the terminal
execl("./nameofthisprogram","nameofthisprogram", "foo", NULL);
} else {
setpgid(child, child); // Either setpgid call will succeed, depending on how the processes are scheduled.
tcsetpgrp(0, child); // Move child to foreground
}
Notice that we call the setpgid()/tcsetpgrp() pair in both the parent and the child. We do so because we don't know which will be scheduled first, and we want to avoid the race condition where the exec'ed program would attempt to read from stdin (and therefore receive a SIGTTIN which would stop the process) before the parent has had time to put it in the foreground. We also ignore SIGTTOU because we know that either the child or the parent will receive one with the calls to tcsetpgrp().
In my program I start a thread to redirect output to the console:
while(TRUE)
{
if (!ReadFile(hPipeRead,lpBuffer,sizeof(lpBuffer),
&nBytesRead,NULL) || !nBytesRead)
{
if (GetLastError() == ERROR_BROKEN_PIPE)
break; // pipe done - normal exit path.
else
DisplayError("ReadFile"); // Something bad happened.
}
// Display the character read on the screen.
if (!WriteConsole(GetStdHandle(STD_OUTPUT_HANDLE),lpBuffer,
nBytesRead,&nCharsWritten,NULL))
DisplayError("WriteConsole");
}
The problem is that it is a loop. Is it possible to avoid running the loop constantly by making the thread wait until something appears in the pipe?
The problem becomes more intense when there is an analogous thread for writing.