About fork() while building a shell - c++

Here is my code:
pid_t fpid = fork();
if (fpid > 0) {
    wait(&fpid);
}
else {
    do_command();
}
The thing is, the function do_command() only executes one line, so I changed do_command() to this:
else {
    execlp("/bin/ls", "ls", "-al", NULL);
    cout << "\ntest" << endl;
}
Also, the ls command was executed, but the cout output was missing.
What's wrong with my code?
Please excuse my poor English and help me.
Here is my do_command() function definition:
void do_command(const char *command) {
    // all commands with arguments
    const char *kernel_address = "/bin/";
    char *kernel_command;
    strcpy(kernel_command, kernel_address);
    strcat(kernel_command, command);
    cout << "\nCommand is:" << kernel_command << "\n" << endl;
    execlp(kernel_command, command, NULL);
}
Also, there is no output at all when the function is called in the child process.

The next line is not printed because the exec family of functions only returns on error. If there is no error, exec replaces your program, and whatever command you passed to it is executed instead. That is why the cout never runs.

From the manual of execlp:
The exec() family of functions replaces the current process image with a new process image.
Once your execlp is called, your program is not running any more; it has been replaced by ls. Thus, it will never reach the cout.
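A minimal sketch of the usual pattern (the command path is just an example; the child reports only if exec itself failed):
#include <iostream>
#include <cstdio>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    pid_t fpid = fork();
    if (fpid < 0) {
        perror("fork");
        return 1;
    }
    if (fpid == 0) {
        // child: execlp only returns on failure
        execlp("/bin/ls", "ls", "-al", (char*)NULL);
        perror("execlp");          // reached only if exec failed
        _exit(127);                // don't fall through into the parent's code
    }
    int status = 0;
    waitpid(fpid, &status, 0);     // parent waits for the child to finish
    std::cout << "child finished" << std::endl;
    return 0;
}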

Check whether a command executed successfully WITHOUT system functions using fork()

fork() creates a new process by duplicating the calling process into a separate memory space. Which process you are in can be determined by checking the pid_t value returned by fork().
I used fork() to create some concurrent processes from a single parent. These processes are commands in a shell that do not need to be executed.
I'm wondering how I can check whether the command is a valid command that can be executed, without using system functions and/or executing it.
#include <iostream>
#include <unistd.h>
#include <string>

int main(){
    std::string command = "/bin/ls";
    //std::string invalidCommand = "/bin/123";
    pid_t pid = fork();
    if(pid == -1 || `here I want to check if the command is executable without execution`){
        std::cout << "Error in forking or Command is not executable" << std::endl;
    }
    else if (pid == 0){
        std::cout << "Process has been forked and valid to execute" << std::endl;
    }
    return 0;
}
These processes are commands in a shell that do not need to be executed.
I don't fully understand what you want to say with this sentence. However, I think you are not aware of how fork() works and how system() is related to fork():
As you already found out, fork() duplicates a running process; this means that the program is now running twice and all variables exist twice (just as if you ran your program twice).
system() internally uses fork() to create a copy of the process; in the newly created process it uses one of the exec() variants (such as execvp()) to replace the program in the new process with another program.
Then it uses one of the wait() variants (such as waitpid()) to wait for the new process to finish:
fflush(stdout);
fflush(stderr);
int newpid = fork();
if(newpid == 0)
{
    execlp("ls", "ls", "./subdirectory", NULL);
    std::cerr << "Could not start \"ls\".\n";
    fflush(stderr);
    _exit(1);
}
if(newpid < 0)
{
    std::cerr << "Not enough memory.\n";
}
else
{
    int code;
    waitpid(newpid, &code, 0);
    if(code == 0) std::cout << "\"ls\" was successful.";
    else std::cout << "\"ls\" was not successful.";
}
If you want "special" behaviour (such as redirecting stdout to a file), you typically don't use the system() function; instead you implement it yourself the way shown above.
I'm wondering how I can check whether the command is a valid command ...
Without running the program it is nearly impossible to find out if some command is executable:
It is possible to find out whether a program with some name (e.g. "/usr/bin/ls") exists and is marked as executable by using the access() function (this is what command -v or test -x do).
However, this test will not detect if a file mistakenly has the x flag set although the file is a document file and not a program. (This is often the case for files on Windows-formatted media.)
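A small sketch of that check using access() (re-using the command path from the question; as noted, this only tells you the file exists and has the x bit set):
#include <iostream>
#include <string>
#include <unistd.h>

int main(){
    std::string command = "/bin/ls";          // as in the question
    if (access(command.c_str(), X_OK) == 0)
        std::cout << command << " exists and is marked executable" << std::endl;
    else
        std::cout << command << " cannot be executed here" << std::endl;
    return 0;
}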
Even if wait() returns the value passed to _exit(), it is difficult to tell whether the reason is that exec() failed (meaning the program could not be executed) or that the program which was started happened to return the same code we use in our _exit() call.
You can send some information from the new process back to the original process if the exec() function has returned; the exec() function never returns on success. However, sending this information is not that easy: just setting a variable will not work:
int ls_failed = 0;
int pid = fork();
if(pid == 0)
{
    execlp("ls", "ls", "./subdirectory", NULL);
    ls_failed = 1;
    _exit(1);
}
waitpid(pid, NULL, 0);
if(ls_failed > 0) std::cout << "Starting \"ls\" failed.";
The two processes behave as if you started the program twice; therefore both processes have their own variables, so the variable ls_failed in the newly started process is not the same as the variable ls_failed in the original process.
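For completeness, one common way this information can actually be sent (not part of the answer above, just a sketch) is to give the child a pipe whose write end is marked close-on-exec: if exec() succeeds the pipe is closed with no data written, and if it fails the child writes errno into it:
#include <iostream>
#include <cerrno>
#include <cstring>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main() {
    int fds[2];
    if (pipe(fds) != 0) return 1;
    fcntl(fds[1], F_SETFD, FD_CLOEXEC);   // write end closes automatically on successful exec

    pid_t pid = fork();
    if (pid == 0) {
        close(fds[0]);
        execlp("ls", "ls", "./subdirectory", (char*)NULL);
        int err = errno;                   // exec failed: report why
        write(fds[1], &err, sizeof err);
        _exit(127);
    }
    close(fds[1]);                         // parent keeps only the read end
    int err = 0;
    ssize_t n = read(fds[0], &err, sizeof err);   // 0 bytes read == exec succeeded
    close(fds[0]);
    waitpid(pid, NULL, 0);
    if (n > 0)
        std::cout << "exec failed: " << std::strerror(err) << "\n";
    else
        std::cout << "\"ls\" was started successfully.\n";
    return 0;
}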
std::cout << ...
Please note that std::cout probably internally performs an fwrite(...,stdout). This function will not write directly to the terminal but it will write to some buffer. When the buffer is full, all data is written at once.
When calling fork() the buffer is duplicated; when using _exit() or exec(), the data in the buffer is lost.
This may lead to weird effects:
std::cout << "We are doing some fork() now.";
int pid = fork();
if(pid == 0)
{
std::cout << "\nNow the child process is running().";
_exit(0);
}
waitpid(pid, NULL, 0);
std::cout << "\nThe child process has finished.\n";
Depending on the buffer size we could get the following output:
We are doing some fork() now.
Now the chi ng some fork() now.
The child process has finished.
Therefore, we should perform an fflush(stdout) and an fflush(stderr) before using fork(), an exec() variant or _exit(), unless we know that the corresponding buffer (stdout for std::cout and stderr for std::cerr) is empty.
You can use the wait system call and check the exit status of the command that you attempted to execute; beyond that, fork alone won't really help you. Try reading the man page of wait.
https://www.man7.org/linux/man-pages/man2/wait.2.html

Wrapping a commandline program with pstream

I want to be able to read from and write to a program from C++. It seems like pstream can do the job, but I find the documentation difficult to understand and have not yet found an example.
I have set up the following minimal working example. This opens python, which in turn (1) prints hello, (2) asks for input, and (3) prints hello2:
#include <iostream>
#include <cstdio>
#include "pstream.h"

using namespace std;

int main(){
    std::cout << "start";
    redi::pstream proc(R"(python -c "if 1:
    print 'hello'
    raw_input()
    print 'hello2'
")");
    std::string line;
    //std::cout.flush();
    while (std::getline(proc.out(), line)){
        std::cout << " " << "stdout: " << line << '\n';
    }
    std::cout << "end";
    return 0;
}
If I run this with the "ask input" part commented out (i.e. #raw_input()), I get as output:
start stdout: hello
stdout: hello2
end
But if I leave the "ask input" part in (i.e. uncommented raw_input()), all I get is blank, not even start, but rather what seems like a program waiting for input.
My question is, how can one interact with this pstream, how can one establish a little read-write-read-write session? Why does the program not even show start or the first hello?
EDIT:
I don't seem to be making much progress. I don't think I really grasp what is going on. Here are some further attempts with commentary.
1) It seems like I can successfully feed raw_input. I prove this by writing to the child's stderr:
int main(){
    cout << "start" << endl;
    redi::pstream proc(R"(python -c "if 1:
    import sys
    print 'hello'
    sys.stdout.flush()
    a = raw_input()
    sys.stdin.flush()
    sys.stderr.write('hello2 '+ a)
    sys.stderr.flush()
")");
    string line;
    getline(proc.out(), line);
    cout << line << endl;
    proc.write("foo",3).flush();
    cout << "end" << endl;
    return 0;
}
output:
start
hello
end
hello2 foo
But it locks up if I try to read from stdout again:
int main(){
    ...
    a = raw_input()
    sys.stdin.flush()
    print 'hello2', a
    sys.stdout.flush()
")");
    ...
    proc.write("foo",3).flush();
    std::getline(proc.out(), line);
    cout << line << endl;
    ...
}
output
start
hello
2) I can't get the readsome approach to work at all
int main(){
    cout << "start" << endl;
    redi::pstream proc(R"(python -c "if 1:
    import sys
    print 'hello'
    sys.stdout.flush()
    a = raw_input()
    sys.stdin.flush()
")");
    std::streamsize n;
    char buf[1024];
    while ((n = proc.out().readsome(buf, sizeof(buf))) > 0)
        std::cout.write(buf, n).flush();
    proc.write("foo",3).flush();
    cout << "end" << endl;
    return 0;
}
output
start
end
Traceback (most recent call last):
File "<string>", line 5, in <module>
IOError: [Errno 32] Broken pipe
The output contains a Python error; it seems like the C++ program finished while the Python pipes were still open.
Question: Can anyone provide a working example of how this sequential communication should be coded?
But if I leave the "ask input" part in (i.e. uncommented raw_input()), all I get is blank, not even start, but rather what seems like a program waiting for input.
The Python process is waiting for input, from its stdin, which is connected to a pipe in your C++ program. If you don't write to the pstream then the Python process will never receive anything.
The reason you don't see "start" is that Python thinks it's not connected to a terminal, so it doesn't bother flushing every write to stdout. Try import sys and then sys.stdout.flush() after printing in the Python program. If you need it to be interactive then you need to flush regularly, or set stdout to non-buffered mode (I don't know how to do that in Python).
You should also be aware that just using getline in a loop will block waiting for more input, and if the Python process is also blocking waiting for input you have a deadlock. See the usage example on the pstreams home page showing how to use readsome() for non-blocking reads. That will allow you to read as much as is available, process it, then send a response back to the child process, so that it produces more output.
EDIT:
I don't think I really grasp what is going on.
Your problems are not really problems with pstreams or python, you're just not thinking through the interactions between two communicating processes and what each is waiting for.
Get a pen and paper and draw state diagrams or some kind of chart that shows where the two processes have got to, and what they are waiting for.
1) It seems like I can successfully feed raw_input
Yes, but you're doing it wrong. raw_input() reads a line, you aren't writing a line, you're writing three characters, "foo". That's not a line.
That means the python process keeps trying to read from its stdin. The parent C++ process writes the three characters then exits, running the pstream destructor, which closes the pipes. Closing the pipes causes the Python process to get EOF, so it stops reading (after getting only three characters, not a whole line). The Python process then prints to stderr, which is connected to your terminal, because you didn't tell the pstream to attach a pipe to the child's stderr, and so you see that output.
But it locks if I try to read from the stdout again
Because now the parent C++ process doesn't exit, so doesn't close the pipes, so the child Python process doesn't read EOF and keeps waiting for more input. The parent C++ process is also waiting for input, but that will never come.
If you want to send a line to be read by raw_input() then write a newline!
This works fine, because it sends a newline, which causes the Python process to get past the raw_input() line:
cout << "start" <<endl;
redi::pstream proc(R"(python -c "if 1:
import sys
print 'hello'
sys.stdout.flush()
a = raw_input()
print 'hello2', a
sys.stdout.flush()
")");
string line;
getline(proc, line);
cout << line << endl;
proc << "foo" << endl; // write to child FOLLOWED BY NEWLINE!
std::getline(proc, line); // read child's response
cout << line << endl;
cout << "end" << endl;
N.B. you don't need to use proc.out() because you haven't attached a pipe to the process' stderr, so it always reads from proc.out(). You would only need to use that when reading from both stdout and stderr, where you would use proc.out() and proc.err() to distinguish them.
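For completeness, a small sketch of what reading both streams could look like (assuming the pstreams pmode flags; the command is just an example that writes to both streams, and the output is small enough that sequential reads cannot deadlock):
#include <iostream>
#include <string>
#include "pstream.h"

int main(){
    // attach pipes to both the child's stdout and stderr
    const redi::pstreams::pmode mode = redi::pstreams::pstdout | redi::pstreams::pstderr;
    redi::pstream both("ls . no-such-file", mode);
    std::string line;
    while (std::getline(both.out(), line))
        std::cout << "stdout: " << line << '\n';
    both.clear();                                  // reset eof state before switching streams
    while (std::getline(both.err(), line))
        std::cout << "stderr: " << line << '\n';
    return 0;
}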
2) I can't get the readsome approach to work at all
Again, you have the same problem that you're only writing three characters and so the Python processes waits forever. The C++ process is trying to read as well, so it also waits forever. Deadlock.
If you fix that by sending a newline (as shown above), you have another problem: the C++ program runs so fast that it gets to the while loop that calls readsome before the Python process has even started. It finds nothing to read in the pipe, so the first readsome call returns 0 and you exit the loop. Then the C++ program gets to the second while loop, and the child python process still hasn't started printing anything yet, so that loop also reads nothing and exits. Then the whole C++ program exits, and finally the Python child is ready to run and tries to print "hello", but by then its parent is gone and it can't write to the pipe.
You need readsome to keep trying if there's nothing to read the first time you call it, so that it waits long enough for the first data to be readable.
For your simple program you don't really need readsome because the Python process only writes a single line at a time, so you can just read it with getline. But if it might write more than one line you need to be able to keep reading until there's no more data coming, which readsome can do (it reads only if there's data available). But you also need some way to tell whether more data is still going to come (maybe the child is busy doing some calculations before it sends more data) or if it's really finished. There's no general way to know that, it depends on what the child process is doing. Maybe you need the child to send some sentinel value, like "---END OF RESPONSE---", which the parent can look for to know when to stop trying to read more.
For the purposes of your simple example, let's just assume that if readsome gets more than 4 bytes it received the whole response:
cout << "start" <<endl;
redi::pstream proc(R"(python -c "if 1:
import sys
print 'hello'
sys.stdout.flush()
a = raw_input()
sys.stdin.flush()
print 'hello2', a
sys.stdout.flush()
")");
string reply;
streamsize n;
char buf[1024];
while ((n = proc.readsome(buf, sizeof(buf))) != -1)
{
if (n > 0)
reply.append(buf, n);
else
{
// Didn't read anything. Is that a problem?
// Need to try to process the content of 'reply' and see if
// it's what we're expecting, or if it seems to be incomplete.
//
// Let's assume that if we've already read more than 4 characters
// it's a complete response and there's no more to come:
if (reply.length() > 3)
break;
}
}
cout << reply << std::flush;
proc << "foo" << std::endl;
while (getline(proc, reply)) // maybe use readsome again here
cout << reply << std::endl;
cout << "end" << endl;
This loops while readsome() != -1, so it keeps retrying if it reads nothing and only stops the loop if there's an error. In the loop body it decides what to do if nothing was read. You'll need to insert your own logic in here that makes sense for whatever you're trying to do, but basically if readsome() hasn't read anything yet, then you should loop and retry. That makes the C++ program wait long enough for the Python program to print something.
You'd probably want to split out the while loop into a separate function that reads a whole reply into a std::string and returns it, so that you can re-use that function every time you want to read a response. If the child sends some sentinel value that function would be easy to write, as it would just stop every time it receives the sentinel string.
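As a rough sketch of such a helper (the function name and sentinel are made up; it assumes the same includes as the examples above and uses the sentinel idea as the stop condition):
// Keep calling readsome until the child's sentinel line shows up.
std::string read_reply(redi::pstream& proc,
                       const std::string& sentinel = "---END OF RESPONSE---")
{
    std::string reply;
    char buf[1024];
    std::streamsize n;
    while ((n = proc.readsome(buf, sizeof(buf))) != -1)
    {
        if (n > 0)
            reply.append(buf, n);
        else if (reply.find(sentinel) != std::string::npos)
            break;                 // child says this response is complete
    }
    return reply;
}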

How to handle the output of terminal command execution from C++, if command doesn't just printing the result, but enters some interactive mode?

This question was inspired by that one.
I've understood how to execute utilities from C or C++ code. But how can we get the result from a command that doesn't just print its result, but enters an interactive mode and runs until we press Ctrl+Z or something like that? An example of such a command is top.
Usually you don't. You launch the command with options that make it non-interactive.
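For example, top has a batch mode that just prints and exits; a minimal sketch reading it through popen() (error handling kept short):
#include <cstdio>
#include <iostream>
#include <string>

int main() {
    // "-b" = batch (non-interactive) mode, "-n 1" = one iteration, then top exits
    FILE* p = popen("top -b -n 1", "r");
    if (!p) return 1;
    char buf[4096];
    std::string output;
    while (fgets(buf, sizeof(buf), p))
        output += buf;               // collect everything top printed
    pclose(p);
    std::cout << output;
    return 0;
}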
Technically you could get information from an interactive terminal interface, but you will have a hard time doing it, because in order for the interface to feel human-friendly, terminal capabilities are often used (termcaps, ncurses, ...), which basically work by outputting special characters. You'll have to dodge these characters by knowing what is expected when, so unless the interface is quite simple and static (and actually even in that case) it's going to be a pain.
Some applications (such as "dialog") can work interactively and write their final result to a different output stream. In the case of dialog, that is done using stderr (by default). If you have control over the application, you can provide for doing something like that, for passing back information to the calling application.
You can simply access stdin, stdout and stderr. Create the pipes as needed before you fork, then fork, and after that call execv or any other exec variant.
An example can be found here:
https://jineshkj.wordpress.com/2006/12/22/how-to-capture-stdin-stdout-and-stderr-of-child-program/
There is also a commonly used library to capture the output of a child program and react with new actions to the items it finds. This library was first written for Tcl but can also be used from C and C++.
http://expect.sourceforge.net/
With some glue code around the expect lib your source can look like this:
int main()
{
    Tcl_Interp *interp = Tcl_CreateInterp();
    Expect_Init(interp);

    // read from a file
    int lfd = open( "test.txt", O_RDONLY );
    int fd = exp_spawnfd(lfd);
    // or read from an application:
    // int fd = exp_spawn("top", "top", (char*)0);

    bool cont = true;
    Expections set1 =
    {
        { exp_glob,   "YYY",          []( Expection& exp)->void { cout << "found: YYY" << endl; }},
        { exp_regexp, "XXX ([0-9]+)", []( Expection& exp)->void { cout << "found: XXX" << exp.GetResult(1) << endl; }},
        { exp_glob,   "END",          [&cont]( Expection& exp)->void { cout << "EOF" << endl; cont = false; }}
    };

    int cnt = 0;
    do
    {
        cout << "Run " << cnt << endl;
        cnt++;
        Expect( fd, set1, 2 );
    } while ( cont );

    cout << "Finish" << endl;
    return 0;
}

why can't I pipe output from both execl calls?

Possible duplicates:
How to call execl() in C with the proper arguments?
Grabbing output from exec
Linux Pipes as Input and Output
Using dup2 for piping
Piping for input/output
I've been trying to learn piping in Linux using dup/dup2 and fork for the last 3 days. I think I've got the hang of it, but when I call two different programs from the child process, it seems that I am only capturing output from the first one called. I don't understand why that is and/or what I'm doing wrong. This is my primary question.
Edit: I think a possible solution is to fork another child and set up pipes with dup2, but I'm more just wondering why the code below doesn't work. What I mean is, I would expect to capture stderr from the first execl call and stdout from the second. This doesn't seem to be happening.
My second question is if I am opening and closing the pipes correctly. If not, I would like to know what I need to add/remove/change.
Here is my code:
#include <stdlib.h>
#include <unistd.h>   // pipe, fork, dup2, execl, read
#include <iostream>
#include <time.h>
#include <sys/wait.h>

#define READ_END 0
#define WRITE_END 1

void parentProc(int* stdoutpipe, int* stderrpipe);
void childProc(int* stdoutpipe, int* stderrpipe);

int main(){
    pid_t pid;
    int status;
    int stdoutpipe[2]; // pipe
    int stderrpipe[2]; // pipe

    // create a pipe
    if (pipe(stdoutpipe) || pipe(stderrpipe)){
        std::cerr << "Pipe failed." << std::endl;
        return EXIT_FAILURE;
    }

    // fork a child
    pid = fork();
    if (pid < 0) {
        std::cerr << "Fork failed." << std::endl;
        return EXIT_FAILURE;
    }
    // child process
    else if (pid == 0){
        childProc(stdoutpipe, stderrpipe);
    }
    // parent process
    else {
        std::cout << "waitpid: " << waitpid(pid, &status, 0)
                  << '\n' << std::endl;
        parentProc(stdoutpipe, stderrpipe);
    }
    return 0;
}

void childProc(int* stdoutpipe, int* stderrpipe){
    dup2(stdoutpipe[WRITE_END], STDOUT_FILENO);
    close(stdoutpipe[READ_END]);
    dup2(stderrpipe[WRITE_END], STDERR_FILENO);
    close(stderrpipe[READ_END]);

    execl("/bin/bash", "/bin/bash", "foo", NULL);
    execl("/bin/ls", "ls", "-1", (char *)0);
    // execl("/home/me/outerr", "outerr", "-1", (char *)0);

    //char * msg = "Hello from stdout";
    //std::cout << msg;
    //msg = "Hello from stderr!";
    //std::cerr << msg << std::endl;
    // close write end now?
}

void parentProc(int* stdoutpipe, int* stderrpipe){
    close(stdoutpipe[WRITE_END]);
    close(stderrpipe[WRITE_END]);

    char buffer[256];
    char buffer2[256];
    read(stdoutpipe[READ_END], buffer, sizeof(buffer));
    std::cout << "stdout: " << buffer << std::endl;
    read(stderrpipe[READ_END], buffer2, sizeof(buffer));
    std::cout << "stderr: " << buffer2 << std::endl;
    // close read end now?
}
When I run this, I get the following output:
yfp> g++ selectTest3.cpp; ./a.out
waitpid: 21423
stdout: hB�(6
stderr: foo: line 1: -bash:: command not found
The source code for the "outerr" binary (commented out above) is simply:
#include <iostream>
int main(){
    std::cout << "Hello from stdout" << std::endl;
    std::cerr << "Hello from stderr!" << std::endl;
    return 0;
}
When I call "outerr" instead of ls or "foo", I get the following output, which is what I would expect:
yfp> g++ selectTest3.cpp; ./a.out
waitpid: 21439
stdout: Hello from stdout
stderr: Hello from stderr!
On execl
Once you successfully call execl or any other function from the exec family, the original process is completely overwritten by the new process. This implies that the new process never "returns" to the old one. If you have two execl calls in a row, the only way the second one can be executed is if the first one fails.
In order to run two different commands in a row, you have to fork one child to run the first command, wait, fork a second child to run the second command, then (optionally) wait for the second child too.
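A sketch of that structure, using the same two commands as in the question (the pipe/dup2 setup from the question is omitted for brevity; each command gets its own fork):
#include <unistd.h>
#include <sys/wait.h>

static void run(const char* path, const char* arg) {
    pid_t pid = fork();
    if (pid == 0) {
        execl(path, path, arg, (char*)NULL);
        _exit(127);                // exec only returns on failure
    }
    int status = 0;
    waitpid(pid, &status, 0);      // wait before starting the next command
}

int main() {
    run("/bin/bash", "foo");       // first child
    run("/bin/ls", "-1");          // second child, forked separately
    return 0;
}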
On read
The read system call does not append a terminating null, so in general you need to look at the return value, which tells you the number of bytes actually read. Then set the following character to null to get a C string, or use the range constructor for std::string.
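For example, the question's parentProc could handle the reads like this (a sketch keeping the original buffer sizes; the macros match the question's):
#include <unistd.h>
#include <iostream>
#include <string>

#define READ_END 0
#define WRITE_END 1

void parentProc(int* stdoutpipe, int* stderrpipe){
    close(stdoutpipe[WRITE_END]);
    close(stderrpipe[WRITE_END]);

    char buffer[256];
    ssize_t n = read(stdoutpipe[READ_END], buffer, sizeof(buffer) - 1);
    if (n < 0) n = 0;              // read error: treat as empty
    buffer[n] = '\0';              // read() does not null-terminate
    std::cout << "stdout: " << buffer << std::endl;

    char buffer2[256];
    ssize_t m = read(stderrpipe[READ_END], buffer2, sizeof(buffer2));
    std::string err(buffer2, buffer2 + (m > 0 ? m : 0));   // range constructor
    std::cout << "stderr: " << err << std::endl;

    close(stdoutpipe[READ_END]);
    close(stderrpipe[READ_END]);
}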
On pipes
Right now you are using waitpid to wait until the child process has already finished, then reading from the pipes. The problem with this is that if the child process produces a lot of output, then it will block because the pipe gets full and the parent process is not reading from it. The result will be a deadlock, as the child waits for the parent to read, and the parent waits for the child to terminate.
What you should do is use select to wait for input to arrive on either the child's stdout or the child's stderr. When input arrives, read it; this will allow the child to continue. When the child process dies, you'll know because you'll get end of file on both. Then you can safely call wait or waitpid.
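A rough sketch of that loop (simplified: one child, both pipes already created, and the parent has already closed its copies of the write ends as in parentProc, otherwise EOF never arrives):
#include <sys/select.h>
#include <sys/wait.h>
#include <unistd.h>
#include <iostream>
#include <algorithm>

// Read from both pipes until both report EOF, then reap the child.
void readBoth(pid_t pid, int outfd, int errfd) {
    bool outOpen = true, errOpen = true;
    char buf[256];
    while (outOpen || errOpen) {
        fd_set readfds;
        FD_ZERO(&readfds);
        if (outOpen) FD_SET(outfd, &readfds);
        if (errOpen) FD_SET(errfd, &readfds);
        if (select(std::max(outfd, errfd) + 1, &readfds, NULL, NULL, NULL) <= 0)
            break;
        if (outOpen && FD_ISSET(outfd, &readfds)) {
            ssize_t n = read(outfd, buf, sizeof(buf));
            if (n <= 0) outOpen = false;          // EOF: child closed its stdout
            else std::cout.write(buf, n);
        }
        if (errOpen && FD_ISSET(errfd, &readfds)) {
            ssize_t n = read(errfd, buf, sizeof(buf));
            if (n <= 0) errOpen = false;          // EOF: child closed its stderr
            else std::cerr.write(buf, n);
        }
    }
    waitpid(pid, NULL, 0);                        // safe now: no more output coming
}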
The exec family of functions replaces the current process image with a new process image. When you execute,
execl("/bin/bash", "/bin/bash", "foo", NULL);
the code from the current process is not executed any more. That's why you never see the result of executing
execl("/bin/ls", "ls", "-1", (char *)0);

Running a C++ program with cmd

When I run a program through the command line, cmd instantly closes once the program ends, so I can't easily see the output. Is there any way to stop this from happening so I can actually verify the output?
#include <iostream>
using namespace std;

class Exercises {
public:
    void sayHello(int x) {
        for (int i = 0; i < x; i++)
            cout << "Hello!!" << endl;
    }
} exercise;

int main() {
    exercise.sayHello(4);
    return 0;
}
You can also use cin.get();
It will wait until you press Enter or until the input stream is closed.
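For the program above that would look something like this (just a sketch for a console build):
#include <iostream>
using namespace std;

int main() {
    cout << "Hello!!" << endl;
    cout << "Press Enter to exit..." << endl;
    cin.get();   // blocks here until Enter is pressed (or input is closed)
    return 0;
}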
The following methods can help keep the command window open until further input is provided.
#include <conio.h>

int main(){
    // your program here
    getch();
    return 0;
}
Another way is to use
system("pause"); at the end of your program.
You can pause the execution of the program for a certain amount of time with:
sleep(5); // sleep for 5 seconds
You could place that at the end of the program before return 0;.
If you don't mind waiting for a keypress at the end of your program, you could put something in.
The simplest way in Windows is to do:
system("pause");
Don't do this if you are releasing your software though. You can implement the behaviour of the pause command easily enough.
std::cout << "Press any key to continue . . . " << std::flush;
while( !_kbhit() ) Sleep(25);
getch();
That's using stuff from conio.h.
However, I'm concerned about the cmd shell itself closing. When you say you "run with cmd", are you actually running up a shell, then typing in your program name and hitting Enter? If that closes the shell, then something is wrong. More likely, you're running it by double-clicking the file in Explorer, right?