calling execve() + changing PWD - C++

I want to write a program A which executes another program B.
It is very important to execute program B from its own directory, because it launches program BB, which sits in the same directory as B.
I mean:
./B will work
./b/B won't work
I thought about two ways to do so:
do fork(), change the PWD environment variable, and then call execv()
do fork(), create a temporary variable, envp, and call execve()
Let's say program A sits here: /home/a, and programs B and BB sit here: /home/a/b
This is my code for program A, which sits in /home/a:
#include <iostream>
#include <cstdio>
#include <cstring>
#include <cstdlib>
#include <cerrno>
#include <unistd.h>
int main() {
    pid_t pid;
    char *const cmd[] = { (char *)"/home/a/b/B", NULL };
    if ((pid = fork()) == 0) {
        /*if (putenv((char *)"PWD=/home/a/b") < 0) {
            fprintf(stderr, "error PWD: %s\n", strerror(errno));
        }*/
        char *const envp[] = { (char *)"PWD=/home/a/b", NULL };
        execve(cmd[0], cmd, envp);
        fprintf(stderr, "error: execve: %s\n", strerror(errno));
        exit(1);
    } else if (pid < 0) {
        fprintf(stderr, "error: fork: %s\n", strerror(errno));
        exit(1);
    }
    fprintf(stderr, "father quits\n");
    return 0;
}
I tried both of my solutions, but neither of them worked.
I mean, I manage to execute program B, but it can't find program BB.
I also printed program B's PWD, and it is /home/a/b/ - but still, it cannot execute BB.
Is it possible?
Can someone see what I am doing wrong?
Thanks

You are looking for chdir() instead of the envp manipulation.
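For instance, here is a minimal sketch of that approach (the paths /home/a/b and ./B are taken from the question; error handling is kept minimal). The child changes its working directory first and only then execs B, so B can start BB with a relative path such as ./BB:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: switch to B's directory before exec'ing it */
        if (chdir("/home/a/b") < 0) {
            fprintf(stderr, "error: chdir: %s\n", strerror(errno));
            exit(1);
        }
        char *const cmd[] = { (char *)"./B", NULL };
        execv(cmd[0], cmd);
        fprintf(stderr, "error: execv: %s\n", strerror(errno));
        exit(1);
    } else if (pid < 0) {
        fprintf(stderr, "error: fork: %s\n", strerror(errno));
        exit(1);
    }
    /* parent continues (or waits) as before */
    return 0;
}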

Waiting for system call to finish

I've been tasked to create a program that takes a text file that contains a list of programs as input. It then needs to run valgrind on the programs (one at a time) until valgrind ends or until the program hits a max allotted time. I have the program doing everything I need it to do EXCEPT it isn't waiting for valgrind to finish. The code I'm using has this format:
//code up to this point is working properly
pid_t pid = fork();
if(pid == 0){
string s = "sudo valgrind --*options omitted*" + testPath + " &>" + outPath;
system(s.c_str());
exit(0);
}
//code after here seems to also be working properly
I'm running into an issue where the child just calls the system and moves on without waiting for valgrind to finish. As such I'm guessing that system isn't the right call to use, but I don't know what call I should be making. Can anyone tell me how to get the child to wait for valgrind to finish?
I think that you are looking for fork/execv. Here is an example:
http://www.cs.ecu.edu/karl/4630/spr01/example1.html
Another alternative could be popen.
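For instance, a minimal popen() sketch (the valgrind options and ./some_program are placeholders for your real command): popen() runs the command through /bin/sh, and pclose() blocks until it has finished and returns its exit status.

#include <stdio.h>

int main(void) {
    FILE *fp = popen("valgrind --leak-check=full ./some_program 2>&1", "r");
    if (fp == NULL) {
        perror("popen");
        return 1;
    }
    char line[1024];
    while (fgets(line, sizeof line, fp) != NULL)
        fputs(line, stdout);       /* forward valgrind's output */
    int status = pclose(fp);       /* waits for the command to exit */
    printf("command finished with status %d\n", status);
    return 0;
}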
You can fork and exec your program and then wait for it to finish. See the following example.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = vfork();
    if (pid == -1)
    {
        perror("fork() failed");
        return -1;
    }
    else if (pid == 0)
    {
        char *args[] = {(char *)"/bin/sleep", (char *)"5", (char *)0};
        execv("/bin/sleep", args);
        _exit(127); /* only reached if execv fails; after vfork, call _exit, not exit */
    }
    int child_status;
    pid_t child_pid = wait(&child_status);
    printf("Child %d finished with status %d\n", (int)child_pid, child_status);
    return 0;
}

WEXITSTATUS always returns 0

I am forking a process and running a wc command using execl. With correct arguments it runs fine, but when I give a wrong file name it fails; yet in both cases the return value of
WEXITSTATUS(status)
is always 0.
I believe there is something wrong with what I am doing, but I'm not sure what it is. Reading the man pages and Google both suggest that I should get the correct value as per the exit status code.
Here is my code:
#include <iostream>
#include <cstdio>
#include <unistd.h>
#include <sys/wait.h>
int main(int argc, const char * argv[])
{
    pid_t pid = fork();
    if (pid < 0) {
        printf("error condition");
    } else if (pid == 0) {
        printf("child process");
        execl("/usr/bin/wc", "wc", "-l", "/Users/gabbi/learning/test/xyz.st", NULL);
        printf("this happened");
    } else {
        int status;
        wait(&status);
        if (WIFEXITED(status)) {
            std::cout << "Child terminated normally" << std::endl;
            printf("exit status is %d", WEXITSTATUS(status));
            return 0;
        } else {
        }
    }
}
If you supply the name of a non-existing file to execl() as its 1st argument, it fails. If this happens, the program falls off the end of main() without returning any specific value, so the default of 0 is returned.
You could fix this, for example, like this:
#include <errno.h>
...
int main(int argc, const char * argv[])
{
    pid_t pid = fork();
    if (pid < 0) {
        printf("error condition");
    } else if (pid == 0) {
        printf("child process");
        execl(...); /* In case exec succeeds it never returns. */
        perror("execl() failed");
        return errno; /* In case exec fails, return something different than 0. */
    }
    ...
You are not passing the file name from argv to the child process.
Instead of
execl("/usr/bin/wc", "wc", "-l", "/Users/gabbi/learning/test/xyz.st",NULL);
try this:
execl("/usr/bin/wc", "wc", "-l", argv[1],NULL);
The output I got on my machine
xxx@MyUbuntu:~/cpp$ ./a.out test.txt
6 test.txt
Child terminated normally
exit status is 0
xxx@MyUbuntu:~/cpp$ ./a.out /test.txt
wc: /test.txt: No such file or directory
Child terminated normally
exit status is 1
This was an Xcode issue; running from the console works fine. I am a Java guy, doing some assignments in C++. Nevertheless, it might come in handy to someone getting stuck on a similar issue.

Linux: Executing child process with piped stdin/stdout

Using Linux and C++, I would like a function that does the following:
string f(string s)
{
string r = system("foo < s");
return r;
}
Obviously the above doesn't work, but you get the idea. I have a string s that I would like to pass as the standard input of a child process execution of application "foo", and then I would like to record its standard output to string r and then return it.
What combination of Linux syscalls or POSIX functions should I use?
I'm using Linux 3.0 and do not need the solution to work with older systems.
The code provided by eerpini does not work as written. Note, for example, that the pipe ends that are closed in the parent are used afterwards. Look at
close(wpipefd[1]);
and the subsequent write to that closed descriptor. This is just transposition, but it shows this code has never been used. Below is a version that I have tested. Unfortunately, I changed the code style, so this was not accepted as an edit of eerpini's code.
The only structural change is that I only redirect the I/O in the child (note the dup2 calls are only in the child path.) This is very important, because otherwise the parent's I/O gets messed up. Thanks to eerpini for the initial answer, which I used in developing this one.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#define PIPE_READ 0
#define PIPE_WRITE 1
int createChild(const char* szCommand, char* const aArguments[], char* const aEnvironment[], const char* szMessage) {
int aStdinPipe[2];
int aStdoutPipe[2];
int nChild;
char nChar;
int nResult;
if (pipe(aStdinPipe) < 0) {
perror("allocating pipe for child input redirect");
return -1;
}
if (pipe(aStdoutPipe) < 0) {
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
perror("allocating pipe for child output redirect");
return -1;
}
nChild = fork();
if (0 == nChild) {
// child continues here
// redirect stdin
if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1) {
exit(errno);
}
// redirect stdout
if (dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1) {
exit(errno);
}
// redirect stderr
if (dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1) {
exit(errno);
}
// all these are for use by parent only
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
// run child process image
// replace this with any exec* function you find easier to use ("man exec")
nResult = execve(szCommand, aArguments, aEnvironment);
// if we get here at all, an error occurred, but we are in the child
// process, so just exit
exit(nResult);
} else if (nChild > 0) {
// parent continues here
// close unused file descriptors, these are for child only
close(aStdinPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
// Include error check here
if (NULL != szMessage) {
write(aStdinPipe[PIPE_WRITE], szMessage, strlen(szMessage));
}
// Just a char by char read here, you can change it accordingly
while (read(aStdoutPipe[PIPE_READ], &nChar, 1) == 1) {
write(STDOUT_FILENO, &nChar, 1);
}
// done with these in this example program, you would normally keep these
// open of course as long as you want to talk to the child
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
} else {
// failed to create child
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
}
return nChild;
}
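For what it's worth, here is a hypothetical usage sketch of createChild() (the /usr/bin/head path, arguments and message text are made up for illustration; adjust for your system). head -n 1 prints the first line it receives and then exits, so the read loop inside createChild() sees EOF and returns:

#include <sys/wait.h>

int main(void) {
    char *const args[] = { (char *)"head", (char *)"-n", (char *)"1", NULL };
    char *const env[]  = { NULL };
    int child = createChild("/usr/bin/head", args, env, "hello from the parent\n");
    if (child < 0)
        return 1;
    waitpid(child, NULL, 0);   /* reap the child once it has exited */
    return 0;
}

Note that createChild() only closes the child's stdin after its read loop, so a filter that reads its input until EOF (cat, wc, ...) would block here; that is why this sketch uses head -n 1.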
Since you want bidirectional access to the process, you would have to do what popen does behind the scenes explicitly with pipes. I am not sure if any of this will change in C++, but here is a pure C example :
void piped(char *str){
int wpipefd[2];
int rpipefd[2];
int defout, defin;
defout = dup(stdout);
defin = dup (stdin);
if(pipe(wpipefd) < 0){
perror("Pipe");
exit(EXIT_FAILURE);
}
if(pipe(rpipefd) < 0){
perror("Pipe");
exit(EXIT_FAILURE);
}
if(dup2(wpipefd[0], 0) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
if(dup2(rpipefd[1], 1) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
if(fork() == 0){
close(defout);
close(defin);
close(wpipefd[0]);
close(wpipefd[1]);
close(rpipefd[0]);
close(rpipefd[1]);
//Call exec here. Use the exec* family of functions according to your need
}
else{
if(dup2(defin, 0) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
if(dup2(defout, 1) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
close(defout);
close(defin);
close(wpipefd[1]);
close(rpipefd[0]);
//Include error check here
write(wpipefd[1], str, strlen(str));
//Just a char by char read here, you can change it accordingly
while(read(rpipefd[0], &ch, 1) != -1){
write(stdout, &ch, 1);
}
}
}
Effectively you do this:
Create pipes and redirect the stdout and stdin to the ends of the two pipes (note that in Linux, pipe() creates unidirectional pipes, so you need to use two pipes for your purpose).
Exec will now start a new process which has the ends of the pipes for stdin and stdout.
Close the unused descriptors, write the string to the pipe and then start reading whatever the process might dump to the other pipe.
dup() is used to create a duplicate entry in the file descriptor table, while dup2() changes what the descriptor points to.
Note: As mentioned by Ammo in his solution, what I provided above is more or less a template; it will not run if you just try to execute it, since there is clearly an exec* (family of functions) call missing, so the child will terminate almost immediately after the fork().
Ammo's code has some error handling bugs. The child process is returning after dup failure instead of exiting. Perhaps the child dups can be replaced with:
if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1 ||
dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1 ||
dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1
)
{
exit(errno);
}
// all these are for use by parent only
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);

Multiple fork() Children with pipes

I am trying to make a program in C++ that creates children with fork().
It should take the number of children from argv and create them; every child creates another one, and they communicate with each other through pipes...
Example: ./a.exe 2
**OUTPUT**
P1 exists
P2 created
Write message: Hello
P1 sending message (“Hello”) to P2
P2 received message (“Hello”) from P1
I get the number from argv, create the correct number of children (I think), and have functions for reading from and writing to the pipe; all are OK.
But I have trouble making it work with multiple children!
My first problem is that if I put more than 2 in argv, the child order is not ascending as it should be! ("Created" shows later.)
But my big problem is that if I write a message with 2 words, I can only read the first word before the space! I am using scanf.
SOME OF MY CODE
//GETTING,CHECKING ARGV
//OPENING PIPE
pid=fork();
if (pid!=0){
waitpid(pid,&child_status,0);
printf("\n\n****Parent Process:ALL CHILD FINISHED!****");
}
else if (pid==0)
{
printf("\n P%d Exists \n",i);
close (mypipe[READ_END]);
write(mypipe[WRITE_END], msg, 256); /* write pipe*/
close (mypipe[WRITE_END]);
printf("Write a message: \n");
scanf("%s",msg);
printf("\n P%d sending message: '%s' to P%d \n",i,msg,i+1);
do{
childpid[i] = fork();
if (childpid[i] > 0){
/* wait for child to terminate */
waitpid(childpid[i],&child_status,0);
}
else if (childpid[i] == 0)
{
/*child process childpid = 0*/
printf("\n P%d Created \n",i+1);
close (mypipe[WRITE_END]);
read(mypipe[READ_END], msg, 256); /* read pipe */
close (mypipe[READ_END]);
printf("\n P%d received message: '%s' from P%d \n",i+1,msg,i);
exit(0);
return;
}
else{
printf("Child Fork failed");
}
i++;
}
while (i<x);
}
else{
printf("Fork failed");
}
}
I have read other similar questions and tried many things, but they didn't help!
Any help will be appreciated!
Thank you!
You have multiple problems in the code you show, and there is code missing as diagnosed in a comment.
One problem is that you close both ends of the only pipe in the first child, and then close it a second time (but fortunately you ignore the error). More seriously, any subsequent children only have a closed pipe to work with, and that isn't going to help them.
Another problem is that the parent process doesn't close the pipe; it may not matter in this program since you don't try reading to EOF, but in most programs, it would matter.
Another problem is that you don't read the message from standard input until after you've tried to write an uninitialized message to the pipe. It isn't immediately clear whether you should hook up the standard input of the child to the read end of the pipe. You write 256 bytes to the pipe even if the message is not that long.
You run into problems with multiple words because scanf("%s", msg) is designed to read up to the first white space (blank, newline, tab, etc). In this context, I'd probably use fgets() to read the information.
I think you need a new pipe for each child. You should probably be error checking each system call, but that is easier if you have a simple error reporting function, like this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
#include <errno.h>
static char *arg0 = 0;
static void err_exit(const char *fmt, ...)
{
int errnum = errno;
va_list args;
va_start(args, fmt);
fprintf(stderr, "%s (%d): ", arg0, (int)getpid());
vfprintf(stderr, fmt, args);
va_end(args);
if (errnum != 0)
fprintf(stderr, "Error %d: %s\n", errnum, strerror(errnum));
exit(1);
}
int main(int argc, char **argv)
{
int i=1;
int x;
int pid, mypipe[2];
pid_t childpid[256];
int child_status;
char msg[256];
arg0 = argv[0];
if (argc != 2 || (x = atoi(argv[1])) <= 0)
err_exit("Usage: %s num-of-children\n", argv[0]);
if (pipe(mypipe) < 0)
err_exit("pipe error\n");
You have some serious thinking to do about how you want to organize things so that a message is relayed from one process to the next. This is just a start... the error reporting function may be of some use to you.
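To illustrate just the "new pipe for each child" and fgets() points, here is a minimal relay sketch; it is not your program (the process numbering, the 256-byte buffer and the chain layout are assumptions made for illustration). Each process creates a fresh pipe, forks the next process, sends the message through that pipe and waits for it:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    int x = (argc > 1) ? atoi(argv[1]) : 2;     /* number of processes in the chain */
    char msg[256];

    printf("Write a message: ");
    fflush(stdout);
    if (fgets(msg, sizeof msg, stdin) == NULL)  /* fgets keeps spaces, unlike scanf("%s") */
        return 1;
    msg[strcspn(msg, "\n")] = '\0';

    for (int i = 1; i < x; i++) {
        int fd[2];                              /* a fresh pipe for each parent/child pair */
        if (pipe(fd) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid > 0) {                          /* current process: send, wait, stop */
            close(fd[0]);
            printf("P%d sending message ('%s') to P%d\n", i, msg, i + 1);
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            wait(NULL);
            break;
        }

        /* child P(i+1): receive, then loop again to create the next child, if any */
        close(fd[1]);
        read(fd[0], msg, sizeof msg);
        close(fd[0]);
        printf("P%d received message ('%s') from P%d\n", i + 1, msg, i);
    }
    return 0;
}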

popen simultaneous read and write [duplicate]

This question already has answers here:
Can popen() make bidirectional pipes like pipe() + fork()?
Is it possible to read from and write to the file descriptor returned by popen()? I have an interactive process I'd like to control through C. If this isn't possible with popen, is there any way around it?
As already answered, popen works in one direction. If you need to read and write, you can create a pipe with pipe(), spawn a new process with fork() and the exec functions, and then redirect its input and output with dup2(). Anyway, I prefer exec over popen, as it gives you better control over the process (e.g. you know its pid).
EDITED:
As the comments suggested, a pipe can be used in one direction only. Therefore you have to create separate pipes for reading and writing. Since the example posted before was wrong, I deleted it and created a new, correct one:
#include<unistd.h>
#include<sys/wait.h>
#include<sys/prctl.h>
#include<signal.h>
#include<stdlib.h>
#include<string.h>
#include<stdio.h>
int main(int argc, char** argv)
{
pid_t pid = 0;
int inpipefd[2];
int outpipefd[2];
char buf[256];
char msg[256];
int status;
pipe(inpipefd);
pipe(outpipefd);
pid = fork();
if (pid == 0)
{
// Child
dup2(outpipefd[0], STDIN_FILENO);
dup2(inpipefd[1], STDOUT_FILENO);
dup2(inpipefd[1], STDERR_FILENO);
//ask kernel to deliver SIGTERM in case the parent dies
prctl(PR_SET_PDEATHSIG, SIGTERM);
//replace tee with your process
execl("/usr/bin/tee", "tee", (char*) NULL);
// Nothing below this line should be executed by the child process. If it is,
// it means that the execl function wasn't successful, so let's exit:
exit(1);
}
// The code below will be executed only by parent. You can write and read
// from the child using pipefd descriptors, and you can send signals to
// the process using its pid by kill() function. If the child process will
// exit unexpectedly, the parent process will obtain SIGCHLD signal that
// can be handled (e.g. you can respawn the child process).
//close unused pipe ends
close(outpipefd[0]);
close(inpipefd[1]);
// Now, you can write to outpipefd[1] and read from inpipefd[0] :
while(1)
{
printf("Enter message to send\n");
scanf("%s", msg);
if(strcmp(msg, "exit") == 0) break;
write(outpipefd[1], msg, strlen(msg));
memset(buf, 0, sizeof buf); /* ensure the answer is NUL-terminated for printf */
read(inpipefd[0], buf, sizeof buf - 1);
printf("Received answer: %s\n", buf);
}
kill(pid, SIGKILL); //send SIGKILL signal to the child process
waitpid(pid, &status, 0);
}
The reason popen() and friends don't offer bidirectional communication is that it would be deadlock-prone, due to buffering in the subprocess. All the makeshift pipework and socketpair() solutions discussed in the answers suffer from the same problem.
Under UNIX, most commands cannot be trusted to read one line and immediately process it and print it, except if their standard output is a tty. The reason is that stdio buffers output in userspace by default, and defers the write() system call until either the buffer is full or the stdio stream is closed (typically because the program or script is about to exit after having seen EOF on input). If you write to such a program's stdin through a pipe, and now wait for an answer from that program's stdout (without closing the ingress pipe), the answer is stuck in the stdio buffers and will never come out - This is a deadlock.
You can trick some line-oriented programs (e.g. grep) into not buffering by using a pseudo-tty to talk to them; take a look at libexpect(3). But in the general case, you would have to re-run a different subprocess for each message, allowing you to use EOF to signal the end of each message and cause whatever buffers in the command (or pipeline of commands) to be flushed. Obviously not a good thing performance-wise.
See more info about this problem in the perlipc man page (it's for bi-directional pipes in Perl but the buffering considerations apply regardless of the language used for the main program).
You want something often called popen2. Here's a basic implementation without error checking (found by a web search, not my code):
// http://media.unpythonic.net/emergent-files/01108826729/popen2.c
#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include "popen2.h" /* must declare struct popen2 { pid_t child_pid; int to_child, from_child; }; */
int popen2(const char *cmdline, struct popen2 *childinfo) {
pid_t p;
int pipe_stdin[2], pipe_stdout[2];
if(pipe(pipe_stdin)) return -1;
if(pipe(pipe_stdout)) return -1;
//printf("pipe_stdin[0] = %d, pipe_stdin[1] = %d\n", pipe_stdin[0], pipe_stdin[1]);
//printf("pipe_stdout[0] = %d, pipe_stdout[1] = %d\n", pipe_stdout[0], pipe_stdout[1]);
p = fork();
if(p < 0) return p; /* Fork failed */
if(p == 0) { /* child */
close(pipe_stdin[1]);
dup2(pipe_stdin[0], 0);
close(pipe_stdout[0]);
dup2(pipe_stdout[1], 1);
execl("/bin/sh", "sh", "-c", cmdline, NULL);
perror("execl"); exit(99);
}
childinfo->child_pid = p;
childinfo->to_child = pipe_stdin[1];
childinfo->from_child = pipe_stdout[0];
close(pipe_stdin[0]);
close(pipe_stdout[1]);
return 0;
}
//#define TESTING
#ifdef TESTING
int main(void) {
char buf[1000];
struct popen2 kid;
popen2("tr a-z A-Z", &kid);
write(kid.to_child, "testing\n", 8);
close(kid.to_child);
memset(buf, 0, 1000);
read(kid.from_child, buf, 1000);
printf("kill(%d, 0) -> %d\n", kid.child_pid, kill(kid.child_pid, 0));
printf("from child: %s", buf);
printf("waitpid() -> %d\n", waitpid(kid.child_pid, NULL, 0));
printf("kill(%d, 0) -> %d\n", kid.child_pid, kill(kid.child_pid, 0));
return 0;
}
#endif
popen() can only open the pipe in read or write mode, not both. Take a look at this thread for a workaround.
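For instance, one-directional use looks like this (ls -l and wc -l are just placeholder commands): "r" captures the command's standard output, "w" feeds its standard input, and pclose() waits for the command in both cases.

#include <stdio.h>

int main(void) {
    char line[256];

    /* read-only pipe: capture the command's standard output */
    FILE *in = popen("ls -l", "r");
    if (in) {
        while (fgets(line, sizeof line, in))
            fputs(line, stdout);
        pclose(in);
    }

    /* write-only pipe: feed the command's standard input */
    FILE *out = popen("wc -l", "w");
    if (out) {
        fputs("one\ntwo\nthree\n", out);
        pclose(out);                 /* wc then prints 3 to our stdout */
    }
    return 0;
}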
In one of the netresolve backends I'm talking to a script, and therefore I need to write to its stdin and read from its stdout. The following function executes a command with stdin and stdout redirected to a pipe. You can use it and adapt it to your liking.
static bool
start_subprocess(char *const command[], int *pid, int *infd, int *outfd)
{
int p1[2], p2[2];
if (!pid || !infd || !outfd)
return false;
if (pipe(p1) == -1)
goto err_pipe1;
if (pipe(p2) == -1)
goto err_pipe2;
if ((*pid = fork()) == -1)
goto err_fork;
if (*pid) {
/* Parent process. */
*infd = p1[1];
*outfd = p2[0];
close(p1[0]);
close(p2[1]);
return true;
} else {
/* Child process. */
dup2(p1[0], 0);
dup2(p2[1], 1);
close(p1[0]);
close(p1[1]);
close(p2[0]);
close(p2[1]);
execvp(*command, command);
/* Error occurred. */
fprintf(stderr, "error running %s: %s", *command, strerror(errno));
abort();
}
err_fork:
close(p2[1]);
close(p2[0]);
err_pipe2:
close(p1[1]);
close(p1[0]);
err_pipe1:
return false;
}
https://github.com/crossdistro/netresolve/blob/master/backends/exec.c#L46
(I used the same code in Can popen() make bidirectional pipes like pipe() + fork()?)
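A hypothetical usage sketch, assuming start_subprocess() above is in the same file together with the headers it needs (cat and the "hello" text are placeholders): write to *infd, close it so the child sees EOF, then read the reply from *outfd.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char *const command[] = { (char *)"cat", NULL };
    int pid, infd, outfd;
    char buf[256];
    ssize_t n;

    if (!start_subprocess(command, &pid, &infd, &outfd))
        return 1;

    write(infd, "hello\n", 6);       /* goes to cat's stdin */
    close(infd);                     /* EOF on cat's stdin, so it exits after echoing */
    while ((n = read(outfd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);   /* "hello" comes back from cat's stdout */
    close(outfd);
    waitpid(pid, NULL, 0);
    return 0;
}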
Use forkpty (it's non-standard, but the API is very nice, and you can always drop in your own implementation if you don't have it) and exec the program you want to communicate with in the child process.
Alternatively, if tty semantics aren't to your liking, you could write something like forkpty but using two pipes, one for each direction of communication, or using socketpair to communicate with the external program over a unix socket.
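A rough forkpty() sketch under those assumptions (Linux/glibc, link with -lutil; bc is just a placeholder interactive program): the child's stdin/stdout/stderr become the slave side of the pty, and the parent reads and writes through the master descriptor.

#include <pty.h>        /* forkpty(); on Linux link with -lutil */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid < 0) {
        perror("forkpty");
        return 1;
    }
    if (pid == 0) {
        /* child: stdin/stdout/stderr are the slave end of the pty */
        execlp("bc", "bc", "-q", (char *)NULL);
        perror("execlp");
        _exit(127);
    }
    /* parent: talk to the program through the master end */
    write(master, "2+2\n", 4);
    char buf[256];
    ssize_t n = read(master, buf, sizeof buf - 1);  /* may return the echoed input first;
                                                       a real program would keep reading */
    if (n > 0) {
        buf[n] = '\0';
        printf("got: %s\n", buf);
    }
    write(master, "quit\n", 5);
    waitpid(pid, NULL, 0);
    close(master);
    return 0;
}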
You can't use popen to create two-way pipes.
In fact, some OSs don't support two-way pipes, in which case a socket-pair (socketpair) is the only way to do it.
popen works for me in both directions (read and write)
I have been using a popen() pipe in both directions:
reading and writing a child process's stdin and stdout with the file descriptor returned by popen(command, "w").
It seems to work fine.
I assumed it would work before I knew better, and it does.
According to the posts above this shouldn't work... which worries me a little bit.
gcc on Raspbian (Raspberry Pi Debian)