send background process to foreground - c++

A C++ program of mine calls fork() and the child immediately executes another program. I have to interact with the child but terminate its parent at the same time, because the parent's executable will be replaced. I somehow need to get the orphan back into the foreground so that I can interact with it via bash - currently I am only getting its output. So I either need to send the parent to the background, bring the child to the foreground and then terminate the parent, or bring the child to the foreground immediately when the parent terminates.
To my knowledge, I must make the child a process group leader before its parent terminates.
With generous borrowing from this thread, I arrived at the following testing ground (note: this is not the full program - it just outlines the procedure):
#include <cstdio>
#include <csignal>
#include <iostream>
#include <sstream>
#include <unistd.h>

int main(int argc, char *argv[])
{
    printf("%i\n", argc);
    printf("\nhello, I am %i\n", getpid());
    printf("parent is %i\n", getppid());
    printf("process leader is %i\n", getsid(getpid()));

    int pgrp;
    std::stringstream pidstream;
    pidstream << tcgetpgrp(STDIN_FILENO);
    pidstream >> pgrp;
    printf("foreground process group ID %i\n", pgrp);

    if (argc == 1)
    {
        int child = fork();
        if (!child) { execl("./nameofthisprogram", "nameofthisprogram", "foo", NULL); }
        else
        {
            signal(SIGTTOU, SIG_IGN);
            usleep(1000 * 1000 * 1);
            tcsetpgrp(0, child);
            tcsetpgrp(1, child);

            std::stringstream pidstream2;
            pidstream2 << tcgetpgrp(STDIN_FILENO);
            pidstream2 >> pgrp;
            printf("foreground process group ID %i\n", pgrp);

            usleep(1000 * 1000 * 3);
            return 0;
        }
    }

    // signal(SIGTTOU, SIG_IGN); unnecessary
    int input;
    int input2;
    printf("write something\n");
    std::cin >> input;
    printf("%i\n", input);
    usleep(1000 * 1000 * 3);
    printf("%i\n", input);
    printf("write something else\n");
    std::cin >> input2;
    usleep(1000 * 1000 * 3);
    printf("%i\n", input2);
    return 0;
}
With the above code, the parent dies after I get prompted for the first input. If I then delay my answer beyond the parent's death, it picks up the first input character and prints it again. For input2, the program does not wait for my input.
So it seems that after the first character, input is entirely terminated.
Am I approaching this fundamentally wrong, or is it simply a matter of reassigning a few more ids and altering some signals?

I see a few things wrong here.
1. You're never putting the child process in its own process group, so it remains in the original one and stays in the foreground along with the parent.
2. You're calling tcsetpgrp() twice; it only needs to be called once. Assuming no redirection, stdin and stdout both refer to the terminal, so either call would do.
With the above code, the parent dies after I get prompted for the first input. If I then delay my answer beyond the parent's death, it picks up the first input character and prints it again. For input2, the program does not wait for my input. So it seems that after the first character, input is entirely terminated.
What you're observing here is a direct consequence of 1.: since both processes are in the foreground, they're both racing to read from stdin and the outcome is undefined.
I somehow need to get the orphan back into the foreground so that I may interact with it via the bash - I am currently only getting its output.
From what I understand, you would expect to be interacting with the exec'ed child after the fork()/exec(). For that to happen, the child needs to be in its own process group, and needs to be put in the foreground.
int child = fork();

signal(SIGTTOU, SIG_IGN);

if (!child) {
    setpgid(0, 0);           // Put in its own process group
    tcsetpgrp(0, getpgrp()); // Avoid race condition where the exec'd program would still be in the background and would try to read from the terminal
    execl("./nameofthisprogram", "nameofthisprogram", "foo", NULL);
} else {
    setpgid(child, child);   // Either setpgid call will succeed, depending on how the processes are scheduled.
    tcsetpgrp(0, child);     // Move child to foreground
}
Notice that we call the setpgid()/tcsetpgrp() pair in both the parent and the child. We do so because we don't know which will be scheduled first, and we want to avoid the race condition where the exec'ed program would attempt to read from stdin (and therefore receive a SIGTTIN which would stop the process) before the parent has had time to put it in the foreground. We also ignore SIGTTOU because we know that either the child or the parent will receive one with the calls to tcsetpgrp().
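Put together, a minimal compilable sketch of that pattern might look as follows; the fork error check and the 127 exit status are illustrative additions, and ./nameofthisprogram is simply the name reused from the question:

#include <cstdio>
#include <csignal>
#include <unistd.h>

int main()
{
    pid_t child = fork();
    if (child < 0) { perror("fork"); return 1; }

    signal(SIGTTOU, SIG_IGN);               // either process may get SIGTTOU from tcsetpgrp()

    if (child == 0) {
        setpgid(0, 0);                      // child: move into its own process group
        tcsetpgrp(STDIN_FILENO, getpgrp()); // claim the terminal in case the child is scheduled first
        execl("./nameofthisprogram", "nameofthisprogram", "foo", (char *)NULL);
        perror("execl");                    // reached only if exec itself fails
        _exit(127);
    }

    setpgid(child, child);                  // parent: same pair of calls; whichever runs first wins
    tcsetpgrp(STDIN_FILENO, child);         // put the child's group in the foreground
    return 0;                               // parent exits, as in the question's scenario
}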


How to correctly use pipe to transfer data from child process to parent process?

I'm trying to create a function that returns true if execvp is successful and false if it is not. Initially, I didn't use a pipe, and the problem was that whenever execvp failed, I got two returns, one false and one true (from the parent). Now that I'm piping, I never get a false returned when execvp fails.
I know there are a lot of related questions and answers on this topic, but I can't seem to narrow down where my particular error is. What I want is for my variables return_type_child, return_type_parent, and this->return_type to all contain the same value. I expected that in the child process, execvp would fail, so the next lines would execute. As a result, I thought that the 3 variables mentioned would all be false, but instead, when I print out the value in this->return_type, 1 is displayed.
bool Command::execute() {
    this->fork_helper();
    return return_type;
}

void Command::fork_helper() {
    bool return_type_child = true;
    int fd[2];
    pipe(fd);
    pid_t child;
    char *const argv[] = {"zf", "-la", nullptr};
    child = fork();
    if (child > 0) {
        wait(NULL);
        close(0);
        close(fd[1]);
        dup(fd[0]);
        bool return_type_parent = read(fd[0], &return_type_child, sizeof(return_type_child));
        this->return_type = return_type_parent;
    }
    else if (child == 0) {
        close(fd[0]);
        close(1);
        dup(fd[1]);
        execvp(argv[0], argv);
        this->return_type = false;
        return_type_child = false;
        write(1, &return_type_child, sizeof(return_type_child));
    }
    return;
}
I've also tried putting a cout statement after execvp(argv[0], argv), which never ran. Any help is greatly appreciated!
From the code, it seems to be an XY problem (edit: moved this section to the front due to a comment that confirms this). If the goal is to get the exit status of the child, then the status reported by wait already provides it, and no pipes are required:
int stat;
wait(&stat);
Read the wait manual page to see how to interpret it. The value of stat can be tested as follows:
WEXITSTATUS(stat) - if WIFEXITED(stat) != 0, this is the lower 8 bits of the child's argument to exit(N) or of its return value from main. It might work correctly without checking WIFEXITED, but the standard does not specify that.
WTERMSIG(stat) - if WIFSIGNALED(stat) != 0, this is the number of the signal that caused the process to exit (e.g. 11 is a segmentation fault). It might work correctly without checking WIFSIGNALED, but the standard does not specify that.
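For illustration, a small sketch of that wait-based approach; the helper name run_and_check and the use of exit status 127 to mark a failed execvp (a convention borrowed from the shell) are assumptions for this example, not part of the original code:

#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

// Returns true if the child could exec the program and it exited with status 0.
static bool run_and_check(char *const argv[])
{
    pid_t child = fork();
    if (child < 0)
        return false;
    if (child == 0) {
        execvp(argv[0], argv);
        _exit(127);                    // reached only if execvp itself failed
    }

    int stat = 0;
    waitpid(child, &stat, 0);          // wait for this specific child

    if (WIFEXITED(stat))
        return WEXITSTATUS(stat) == 0; // 127 (exec failure) or any other non-zero exit -> false
    if (WIFSIGNALED(stat))
        printf("child killed by signal %d\n", WTERMSIG(stat));
    return false;
}

int main()
{
    char *const argv[] = {(char *)"zf", (char *)"-la", nullptr};
    printf("%s\n", run_and_check(argv) ? "true" : "false");
}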
There are several errors in the code. See the added comments:
void Command::fork_helper() {
    // File descriptors here: 0=stdin, 1=stdout, 2=stderr
    // (and 3..N opened in the program, could also be none).
    bool return_type_child = true;
    int fd[2];
    pipe(fd);
    // File descriptors here: 0=stdin, 1=stdout, 2=stderr
    // (and 3..N opened in the program, could also be none).
    // N+1=fd[0] data exhaust of the pipe
    // N+2=fd[1] data intake of the pipe
    pid_t child;
    char *const argv[] = {"zf", "-la", nullptr};
    child = fork();
    if (child > 0) {
        // This code is executed in the parent.
        wait(NULL); // wait for the child to complete.
This wait is a potential deadlock: if the child writes enough data to the pipe (usually in the kilobytes), its write blocks until the parent reads from the pipe, while the parent's wait(NULL) blocks until the child completes. This is likely not affecting the code in question, but it is problematic.
        close(0);
        close(fd[1]);
        dup(fd[0]);
        // File descriptors here: 0=new stdin=data exhaust of the pipe
        // 1=stdout, 2=stderr
        // (and 3..N opened in the program, could also be none).
        // N+1=fd[0] data exhaust of the pipe (stdin is now a duplicate)
This is problematic: the code just lost the original stdin, and the pipe is never closed. You should close fd[0] explicitly; don't close(0), and don't duplicate fd[0]. It is a good idea to avoid having duplicate descriptors, except for having stderr duplicate stdout.
        bool return_type_parent = read(fd[0], &return_type_child, sizeof(return_type_child));
        this->return_type = return_type_parent;
    }
    else if (child == 0) {
        // This code runs in the child.
        close(fd[0]);
        close(1);
        dup(fd[1]);
        // File descriptors here: 0=stdin, 1=new stdout=pipe intake, 2=stderr
        // (and 3..N opened in the program, could also be none).
        // N+2=fd[1] pipe intake (new stdout is a duplicate)
This is problematic, since there are two duplicate data intakes to the pipe. In this case it is not critical, since both are closed automatically when the process ends, but it is bad practice: END-OF-FILE is signalled to the exhaust only when all of the pipe's intakes are closed, so closing one intake but not the other does not signal END-OF-FILE. Again, in your case it is not causing trouble, since the child's exit closes all the intakes.
        execvp(argv[0], argv);
The code below the line above is never reached unless execvp itself fails. execvp fails only when the file does not exist or the caller has no permission to execute it. If the executable starts to execute and fails later (possibly even if it fails to load a shared library), execvp itself has still succeeded and never returns. This is because execvp replaces the executable image, so the following code is no longer in memory once the other program is running.
        this->return_type = false;
        return_type_child = false;
        write(1, &return_type_child, sizeof(return_type_child));
    }
    return;
}
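Putting those corrections together (and keeping the pipe purely for illustration, since wait alone already answers the original question), fork_helper might look roughly like this; it assumes the same Command class as in the question and needs <unistd.h> and <sys/wait.h>:

void Command::fork_helper() {
    int fd[2];
    pipe(fd);

    char *const argv[] = {(char *)"zf", (char *)"-la", nullptr};
    pid_t child = fork();

    if (child == 0) {
        close(fd[0]);                       // child: will not read from the pipe
        execvp(argv[0], argv);
        bool failed = true;                 // reached only if execvp itself failed
        write(fd[1], &failed, sizeof(failed));
        _exit(127);
    }

    close(fd[1]);                           // parent: close its copy of the pipe's intake
    bool child_reported_failure = false;
    ssize_t n = read(fd[0], &child_reported_failure, sizeof(child_reported_failure));
    close(fd[0]);

    int stat = 0;
    wait(&stat);                            // reap the child after the pipe has been drained

    // If nothing arrived on the pipe, execvp never returned, i.e. it succeeded.
    this->return_type = (n == 0);
}

Note that the read only returns once the exec'd program has exited, because the new program inherits fd[1]; marking fd[1] close-on-exec with fcntl(fd[1], F_SETFD, FD_CLOEXEC) before the exec would make the read return as soon as execvp succeeds.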

Closing unused end in pipes

I was reading about pipes in my operating system course and writing some code to understand it better. I have a doubt regarding the following code:
#include <iostream>
#include <cstdlib>
#include <unistd.h>
#include <sys/wait.h>
using namespace std;

int main()
{
    int fd[2]; // CREATING PIPE
    pipe(fd);
    int status;
    int pid = fork();
    if (pid == 0)
    {
        // WRITER PROCESS (CHILD)
        srand(123);
        int arr[3] = {1, 2, 3};
        close(fd[0]); // CLOSE UNUSED (READING) END
        for (int i = 0; i < 3; i++)
            write(fd[1], &arr[i], sizeof(int));
        close(fd[1]); // CLOSE WRITING END AFTER WRITING SO THAT READ GETS THE EOF
    }
    else
    {
        // READER PROCESS (PARENT)
        int arr[10];
        int i = 0;
        int n_bytes;
        //close(fd[1]); // CLOSE UNUSED (WRITING) END
        while ((n_bytes = read(fd[0], &arr[i], sizeof(int))) > 0) // READING IN A LOOP UNTIL EOF
            i++;
        close(fd[0]); // CLOSE READING END AFTER READING
        for (int j = 0; j < i; j++)
            cout << arr[j] << endl;
        while (wait(&status) > 0)
            ;
    }
}
If I run this, the read blocks; if I uncomment the close(fd[1]) line in the reader process, the code runs fine.
That means close(fd[1]) closes the write end and read can proceed.
My doubt is: even if I don't close the write end in the reader process, it still gets closed at the end of the writer process. So why does the read syscall still block?
Initially, both processes have open file descriptors to both the read and write ends of the pipe.
The OS only really closes an end of the pipe when all open file descriptors referring to it have been closed. So if you don't call close(fd[1]) in the reader process (the parent here), one descriptor to the write end remains open, the write end is never closed, and read blocks waiting for input that will never come.
Two problems:
The first is that, due to operator precedence, the loop condition n_bytes=read(fd[0],&arr[i],sizeof(int))>0 is really equal to n_bytes = (read(fd[0],&arr[i],sizeof(int)) > 0). That is, you assign the value of the comparison to the variable n_bytes. To correct this, add extra parentheses around the assignment, as in (n_bytes=read(fd[0],&arr[i],sizeof(int)))>0.
The second problem is that both the parent and the child process will call wait in a loop. You should only do that in the parent process to wait for the child.

weird behaviour when calling external script

My C++ program is just a very simple while loop in which I grab a user command from the console (standard input, stdin) using the blocking getline() function. Every now and then I must call an external bash script for other purposes. The script is not directly related to what the user does; it just does some stuff on the filesystem, but it has to print text lines to the console standard output (stdout) to inform the user about the outcome of its computations.
What I get is that as soon as the script starts and prints stuff to stdout, the getline() function behaves as if it were non-blocking (it is supposed to block until the user inputs some text). As a consequence, the while(1) starts spinning at full speed and the CPU usage skyrockets to nearly 100%.
I narrowed down the problem to a single C++ source file which reproduces the problem in the same exact way, here it is:
#include <iostream>
#include <string>
#include <sstream>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pid = fork(); // spawn
    if (pid > 0)
    {
        // child thread
        system("sleep 5; echo \"you're screwed up!!!\"");
    }
    else
    {
        // main thread
        std::string input;
        while (1)
        {
            std::cout << std::endl << "command:";
            getline(std::cin, input);
        }
    }
}
In this particular case, after 5 seconds the program starts spamming "\ncommand:" on stdout and the only way to stop it is to send a SIGKILL signal. Sometimes you have to press some keys on the keyboard before the program starts spamming text lines.
Let this code run for 10 seconds, then press any key on the keyboard. Be ready to fire the SIGKILL signal at the process from another console; you can use the command killall -9 progname.
Did you check if failbit or eof is set?
Try changing the following line of your code
if (pid > 0)
to
if (pid == 0)
fork() returns 0 to the child and the child's pid to the parent. In your example, you are running system() in the parent and then exiting the parent. The child then becomes an orphan process running in a while(1) loop, which I guess is messing up stdin/stdout.
I have modified your program to run system() in the child process.
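A sketch of that modification might look like this; the stream-state check and the waitpid call are additions for illustration, not part of the original program:

#include <iostream>
#include <string>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int pid = fork(); // spawn
    if (pid == 0)
    {
        // child: run the external script and exit
        system("sleep 5; echo \"you're screwed up!!!\"");
        _exit(0);
    }

    // parent: keep the interactive loop
    std::string input;
    while (1)
    {
        std::cout << std::endl << "command:";
        if (!getline(std::cin, input)) // stop looping if stdin hits EOF or an error
            break;
    }

    waitpid(pid, NULL, 0); // reap the child before exiting
}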
The basic problem:
if(pid > 0)
{
// child thread
system("sleep 5; echo \"you're screwed up!!!\"");
}
this is the PARENT. ;) The child gets pid 0.

How to prevent input text breaking while other thread is outputting to the console?

I have 2 threads: one of them is constantly cout'ing some value to the console, let's say incrementing an int value every second - so every second the console shows 1, 2, 3... and so on.
Another thread is waiting for user input - with the command cin.
Here is my problem: when I start typing something and the time comes to cout the int value, my input gets erased from the input line and put onto the console together with the int value. So when I want to type in "hello" it looks something like this:
1
2
3
he4
l5
lo6
7
8
Is there a way to prevent my input from getting put to the console, while other thread is writing to the console?
FYI this is needed for a chat app at client side - one thread is listening for messages and outputs this message as soon as it comes in, and the other thread is listening for user input to be sent to a server app.
Usually the terminal itself echoes the keys typed. You can turn this off and have your program do the echoing. This question will give you pointers on how to do it: Hide password input on terminal
You can then just get the one thread to handle output.
If you are a slow typer, then the solution to your problem can be, as I said, making it a single thread, but that may make the app receive only after it sends.
Another way is to increase your receiving thread's sleep time, which would give you some more time to type without interruption.
You could make a GUI (or use ncurses if you really want to work in the console). This way you avoid having std::cout shared by the threads.
I think you could solve this problem with a semaphore. When you have an incoming message, you check to see if the user is writing something. If he is, you wait until he finishes before printing the message.
Is there a way to prevent my input from getting put to the console, while other thread is writing to the console?
It is the other way around. The other thread shouldn't interrupt the display of what you are typing.
Say you have typed "Hel" and then a new message comes in from the other thread. What do you do? How should it be displayed?
1) Totally disable echoing of what you type and only display it after you hit enter. In this way you can display messages from the different threads properly, in an atomic fashion. The big drawback is that you cannot see what you have typed already... :(
2) You immediately echo what you type. When a new message comes in, you undo the "Hel", print the new message and print "Hel" again on a new line, and you can continue typing. Doable but a bit ugly.
3) You echo what you type in a separate place. That is, you split the display somehow: in one place you display the posted/received messages in order, and in another place you display what you are typing. You either need a GUI or at least some console library to do this. This would be the nicest solution but perhaps the most difficult to port to another OS due to the library dependencies.
In any case, you need a (preferably internally) synchronized stream that can safely be called from different threads and into which strings can be written atomically. That is, you need to write your own synchronized stream class.
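For instance, a minimal sketch of such a synchronized writer using a mutex; the class name SyncOut is made up for illustration, and this only makes the output side atomic - the echo handling from the options above is still needed:

#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <chrono>

// All console output goes through this class, so lines from different threads never interleave.
class SyncOut {
public:
    void writeLine(const std::string &line) {
        std::lock_guard<std::mutex> lock(mutex_);
        std::cout << line << '\n' << std::flush;
    }
private:
    std::mutex mutex_;
};

int main() {
    SyncOut out;

    std::thread counter([&out] {
        for (int i = 1; i <= 5; ++i) {
            out.writeLine(std::to_string(i));
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    });

    std::string input;
    while (std::getline(std::cin, input))
        out.writeLine("you typed: " + input);

    counter.join();
}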
Hope this helps.
Well, I recently solved this same issue with a basic workaround. This might not be the #1 solution, but it worked like a charm for me, as a newbie:
#include <iostream> // I/O
#include <Windows.h> // Sleep();
#include <conio.h> // _getch();
#include <string> // MessageBuffer
#include <thread> // Thread
using namespace std;
void ThreadedOutput();
string MessageBuffer; // or make it static
int main()
{
    thread output(ThreadedOutput); // Attach the output thread
    int count = 0;
    char cur = 'a'; // Temporary at start
    while (cur != '\r')
    {
        cur = _getch(); // Take 1 input
        if (cur >= 32 && cur <= 126) // Check if input lies in alphanumeric and special keys
        {
            MessageBuffer += cur; // Store input in buffer
            cout << cur; // Output the value user entered
            count++;
        }
        else if (cur == 8 && count > 0) // If input key was backspace (and there is something to erase)
        {
            cout << "\b \b"; // Move cursor 1 step back, overwrite previous character with space, move cursor 1 step back
            MessageBuffer = MessageBuffer.substr(0, MessageBuffer.size() - 1); // Remove last character from buffer
            count--;
        }
        else if (cur == 13) // If input was 'return' key
        {
            for (int i = 0; i < (signed)MessageBuffer.length(); i++) // Remove the written input
                cout << "\b \b";
            // "MessageBuffer" has your input, use it somewhere
            MessageBuffer = ""; // Clear the buffer
        }
    }
    output.join(); // Join the thread
}

void ThreadedOutput()
{
    int i = 0;
    while (true)
    {
        for (int j = 0; j < (signed)MessageBuffer.length(); j++) // Remove the written input
            cout << "\b \b";
        cout << ++i << endl; // Give parallel output with input
        cout << MessageBuffer; // Rewrite the stored buffer
        Sleep(1000); // Prevent this example spam
    }
}

C++ - Making an event loop

Does anyone know how to make an event loop in C++ without a library? It doesn't have to be cross-platform; I'm on a Mac. Basically, I want the program to run and do nothing until the user presses the up arrow key, then the program will output "You pressed up" or something. All I can think of is having an infinite while or for loop and getting input with cin, but I don't think cin can detect arrow keys, and I believe it pauses the program until it reaches a '\n'.
I would want it to look like this:
void RUN()
{
while(true)
{
// poll events and do something if needed
}
}
int main()
{
RUN();
}
I'm kinda sure it's possible without threads, and I've heard that this can be accomplished with fd_set or something, but I'm not sure how.
Any help would be really appreciated.
EDIT:
The program has to run in the background when there aren't any events. For example, Microsoft Word doesn't stop until the user presses a button; it keeps running. I want something like that, but command-line, not GUI.
Since you're talking keyboard input, and not looking for a Mac look and feel, what you want is the UNIX way of doing it. And that is:
1) set the terminal in either raw or cbreak mode (I forget which).
2) now use read() to read single characters at a time.
3) temporarily echo the character read (as an int) so you can find out what the up arrow key gives you.
As for the more general event loop question, where the only input device is the keyboard: you sit in a loop, and whenever a key is typed (in raw mode?) you call a routine with the value of the key typed. If you had more input devices, you would need multiple threads, each listening to a different device and putting what it finds on a queue (with appropriate locking). The main loop would then check the queue and call a routine appropriately every time something appears in it.
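A rough sketch of steps 1) to 3) using termios non-canonical mode with echo turned off; the details here are illustrative, not taken from the answer:

#include <cstdio>
#include <termios.h>
#include <unistd.h>

int main()
{
    termios old_tio;
    tcgetattr(STDIN_FILENO, &old_tio);          // save the current terminal settings

    termios raw = old_tio;
    raw.c_lflag &= ~(ICANON | ECHO);            // no line buffering, no echo
    raw.c_cc[VMIN] = 1;                         // read() returns after a single byte
    raw.c_cc[VTIME] = 0;
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    unsigned char c;
    while (read(STDIN_FILENO, &c, 1) == 1 && c != 'q')
        printf("got byte %d\n", c);             // step 3: print the value to see what the arrow keys send

    tcsetattr(STDIN_FILENO, TCSANOW, &old_tio); // restore the terminal
    return 0;
}

Pressing the up arrow while this runs typically prints three bytes, 27 91 65 (ESC [ A), which is exactly the multi-byte sequence issue discussed further down.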
You can use ncurses and enable cbreak to get the raw input stream.
I've used a while loop with signal handlers. Like this incomplete snippet.
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <csignal>

// Globals assumed by this incomplete snippet (sig_atomic_t so they are safe to set from a signal handler).
static volatile sig_atomic_t g_next = 0;
static volatile sig_atomic_t g_quit = 0;
static double globalVariable = 0.0;

void getSomething()
{
    std::cout << "Enter new step size: "; std::cout.flush();
    std::cin >> globalVariable;
    std::getchar(); // consume enter key.
}

void printCommands()
{
    std::cout << "1: do something\n"
              << "q: quit\n"
              << "h: help\n"
              << std::endl;
}

void getCommand()
{
    // Output prompt
    std::cout << "Enter command ('h' for help): "; std::cout.flush();
    // Set terminal to raw mode
    int ret = system("stty raw");
    // Wait for single character
    char input = std::getchar();
    // Reset terminal to normal "cooked" mode
    ret = system("stty cooked");
    std::cout << std::endl;

    if (input == 'h') printCommands();
    else if (input == '1') getSomething();
    else if (input == 'q') {
        g_next = true;
        g_quit = true;
    }
}

void signalHandler(int signo)
{
    if (signo == SIGINT) {
        g_next = true;
    } else if (signo == SIGQUIT) {
        getCommand();
    }
}

int main(int argc, char* argv[])
{
    signal(SIGINT, signalHandler);
    signal(SIGUSR1, signalHandler);
    signal(SIGQUIT, signalHandler);
    do {
        // Stuff
    } while (!g_quit);
    exit(0);
}
The question has been updated to say "The program has to run in the background ... but command-line not GUI."
All traditional *NIX shells that can put a program into the background also disconnect the program's standard input from the terminal, so AFAIK this has become impossible.
This does not need to be Mac specific. The Mac supports *NIX mechanisms for reading characters from a keyboard.
AFAICT all the program is doing is waiting for a character, so it might as well block.
Normally the terminal device, tty (teletype!), is interpreting characters typed on the keyboard before your program can read them from standard input. Specifically, the tty device normally buffers an entire line of text and intercepts the rubout character (and a few others, like CTRL+w) to edit the line of text. This pre-processing of characters is called a 'line discipline'.
You need to set the tty device driver to stop doing that! Then you can get all of the characters the user types.
You change the device using ioctl or termios on the file descriptor.
Search for e.g. "ioctl tty line discipline raw" to understand the details, and find program examples.
You can set the terminal to 'raw' using the command line program stty.
Please read the stty man page, because setting it back can be slightly tricky (NB: if you make a mistake, it is often easier to kill the terminal than to try to fix it, because there is no echoing of anything you type).
It is possible that the up arrow is not a single char, so it will require some byte-at-a-time decoding to avoid blocking at the wrong point in the input stream. That is, if some input sequences are one character and others are two or three characters, the decoding needs to happen at each byte to decide whether a byte is still pending; otherwise one too many reads might get issued, which would cause the program to block.
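As a rough illustration of that byte-at-a-time decoding, here is a sketch that assumes the terminal has already been put into raw/non-canonical mode (as above) and that the up arrow arrives as the common VT100-style sequence ESC [ A; other terminals may send something different:

#include <cstdio>
#include <unistd.h>

// Reads one byte at a time and reports when the ESC [ A sequence (up arrow on
// VT100-style terminals) has been seen; 'q' quits.
int main()
{
    unsigned char c;
    int state = 0;                               // 0 = idle, 1 = got ESC, 2 = got ESC [
    while (read(STDIN_FILENO, &c, 1) == 1 && c != 'q') {
        if (state == 0 && c == 27)       state = 1;  // ESC
        else if (state == 1 && c == '[') state = 2;  // ESC [
        else if (state == 2 && c == 'A') {           // ESC [ A -> up arrow
            printf("You pressed up\n");
            state = 0;
        }
        else state = 0;                          // anything else resets the decoder
    }
    return 0;
}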