This question follows from my attempt to implement the instructions in:
Linux Pipes as Input and Output
How to send a simple string between two programs using pipes?
http://tldp.org/LDP/lpg/node11.html
My question is along the lines of the question in: Linux Pipes as Input and Output, but more specific.
Essentially, I am trying to replace:
/directory/program < input.txt > output.txt
using pipes in C++ in order to avoid using the hard drive. Here's my code:
//LET THE PLUMBING BEGIN
int fd_p2c[2], fd_pFc[2], bytes_read;
// "p2c" = pipe_to_child, "pFc" = pipe_from_child (see above link)
pid_t childpid;
char readbuffer[80];
string program_name;// <---- includes program name + full path
string gulp_command;// <---- includes my line-by-line stdin for program execution
string receive_output = "";
pipe(fd_p2c);//create pipe-to-child
pipe(fd_pFc);//create pipe-from-child
childpid = fork();//create fork
if (childpid < 0)
{
cout << "Fork failed" << endl;
exit(-1);
}
else if (childpid == 0)
{
dup2(0,fd_p2c[0]);//close stdin & make read end of p2c into stdin
close(fd_p2c[0]);//close read end of p2c
close(fd_p2c[1]);//close write end of p2c
dup2(1,fd_pFc[1]);//close stdout & make write end of pFc into stdout
close(fd_pFc[1]);//close write end of pFc
close(fd_pFc[0]);//close read end of pFc
//Execute the required program
execl(program_name.c_str(),program_name.c_str(),(char *) 0);
exit(0);
}
else
{
close(fd_p2c[0]);//close read end of p2c
close(fd_pFc[1]);//close write end of pFc
//"Loop" - send all data to child on write end of p2c
write(fd_p2c[1], gulp_command.c_str(), (strlen(gulp_command.c_str())));
close(fd_p2c[1]);//close write end of p2c
//Loop - receive all data from child on read end of pFc
while (1)
{
bytes_read = read(fd_pFc[0], readbuffer, sizeof(readbuffer));
if (bytes_read <= 0)//if nothing read from buffer...
break;//...break loop
receive_output += readbuffer;//append data to string
}
close(fd_pFc[0]);//close read end of pFc
}
I am absolutely sure that the above strings are initialized properly. However, two things happen that don't make sense to me:
(1) The program I am executing reports that the "input file is empty." Since I am not calling the program with "<" it should not be expecting an input file. Instead, it should be expecting keyboard input. Furthermore, it should be reading the text contained in "gulp_command."
(2) The program's report (provided via standard output) appears in the terminal. This is odd because the purpose of this piping is to transfer stdout to my string "receive_output." But since it is appearing on screen, that indicates to me that the information is not being passed correctly through the pipe to the variable. If I implement the following at the end of the if statement,
cout << receive_output << endl;
I get nothing, as though the string is empty. I appreciate any help you can give me!
EDIT: Clarification
My program currently communicates with another program using text files. My program writes a text file (e.g. input.txt), which is read by the external program. That program then produces output.txt, which is read by my program. So it's something like this:
my code -> input.txt -> program -> output.txt -> my code
Therefore, my code currently uses,
system("program < input.txt > output.txt");
I want to replace this process using pipes. I want to pass my input as standard input to the program, and have my code read the standard output from that program into a string.
Your primary problem is that you have the arguments to dup2() reversed. You need to use:
dup2(fd_p2c[0], 0); // Duplicate read end of pipe to standard input
dup2(fd_pFc[1], 1); // Duplicate write end of pipe to standard output
I got suckered into misreading what you wrote as OK until I put error checking on the set-up code and got unexpected values from the dup2() calls, which told me what the trouble was. When something goes wrong, insert the error checks you skimped on before.
You also did not ensure null termination of the data read from the child; this code does.
Working code (with diagnostics), using cat as the simplest possible 'other command':
#include <unistd.h>
#include <string>
#include <iostream>
using namespace std;
int main()
{
int fd_p2c[2], fd_c2p[2], bytes_read;
pid_t childpid;
char readbuffer[80];
string program_name = "/bin/cat";
string gulp_command = "this is the command data sent to the child cat (kitten?)";
string receive_output = "";
if (pipe(fd_p2c) != 0 || pipe(fd_c2p) != 0)
{
cerr << "Failed to pipe\n";
exit(1);
}
childpid = fork();
if (childpid < 0)
{
cout << "Fork failed" << endl;
exit(-1);
}
else if (childpid == 0)
{
if (dup2(fd_p2c[0], 0) != 0 ||
close(fd_p2c[0]) != 0 ||
close(fd_p2c[1]) != 0)
{
cerr << "Child: failed to set up standard input\n";
exit(1);
}
if (dup2(fd_c2p[1], 1) != 1 ||
close(fd_c2p[1]) != 0 ||
close(fd_c2p[0]) != 0)
{
cerr << "Child: failed to set up standard output\n";
exit(1);
}
execl(program_name.c_str(), program_name.c_str(), (char *) 0);
cerr << "Failed to execute " << program_name << endl;
exit(1);
}
else
{
close(fd_p2c[0]);
close(fd_c2p[1]);
cout << "Writing to child: <<" << gulp_command << ">>" << endl;
int nbytes = gulp_command.length();
if (write(fd_p2c[1], gulp_command.c_str(), nbytes) != nbytes)
{
cerr << "Parent: short write to child\n";
exit(1);
}
close(fd_p2c[1]);
while (1)
{
bytes_read = read(fd_c2p[0], readbuffer, sizeof(readbuffer)-1);
if (bytes_read <= 0)
break;
readbuffer[bytes_read] = '\0';
receive_output += readbuffer;
}
close(fd_c2p[0]);
cout << "From child: <<" << receive_output << ">>" << endl;
}
return 0;
}
Sample output:
Writing to child: <<this is the command data sent to the child cat (kitten?)>>
From child: <<this is the command data sent to the child cat (kitten?)>>
Note that you will need to be careful to ensure you don't get deadlocked with your code. If you have a strictly synchronous protocol (so the parent writes a message and reads a response in lock-step), you should be fine, but if the parent is trying to write a message that's too big to fit in the pipe to the child while the child is trying to write a message that's too big to fit in the pipe back to the parent, then each will be blocked writing while waiting for the other to read.
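If your protocol is not strictly lock-step, one common way around that deadlock (a sketch under that assumption, not part of the code above) is to multiplex the two pipe ends with poll() so the parent never blocks on a full pipe. The pump() helper and its parameter names below are hypothetical:
#include <poll.h>
#include <fcntl.h>
#include <unistd.h>
#include <string>

// Feed `input` to the child and collect its output, interleaving the two so
// that neither side can block forever on a full pipe. `to_child` and
// `from_child` are the parent's ends of the two pipes.
std::string pump(int to_child, int from_child, const std::string &input)
{
    std::string output;
    size_t written = 0;

    fcntl(to_child, F_SETFL, O_NONBLOCK);  // a full pipe must not block us
    if (input.empty())
        close(to_child);                   // nothing to send: give the child EOF

    while (true) {
        struct pollfd fds[2];
        fds[0].fd = from_child;
        fds[0].events = POLLIN;
        fds[1].fd = (written < input.size()) ? to_child : -1;  // -1 = ignored
        fds[1].events = POLLOUT;

        if (poll(fds, 2, -1) < 0)
            break;                         // poll() failed

        if (fds[0].revents & (POLLIN | POLLHUP)) {
            char buf[4096];
            ssize_t n = read(from_child, buf, sizeof buf);
            if (n <= 0)
                break;                     // EOF or error: child is finished
            output.append(buf, n);
        }
        if (fds[1].revents & POLLOUT) {
            ssize_t n = write(to_child, input.data() + written,
                              input.size() - written);
            if (n > 0)
                written += n;
            if (written == input.size())
                close(to_child);           // done sending: child sees EOF
        }
    }
    return output;
}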
It sounds like you're looking for coprocesses. You can program them in C/C++, but since they are already available in the (bash) shell, it's easier to just use the shell, right?
First start the external program with the coproc builtin:
coproc external_program
The coproc starts the program in the background and stores the file descriptors to communicate with it in an array shell variable. Now you just need to start your program connecting it to those file descriptors:
your_program <&${COPROC[0]} >&${COPROC[1]}
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <string.h>
#include <iostream>
using namespace std;
int main() {
int i, status, len;
char str[21]; // sized to match the 20-byte buffer used below
mknod("pipe", S_IFIFO | S_IRUSR | S_IWUSR, 0); //create named pipe
pid_t pid = fork(); // create new process
/* Process A */
if (pid == 0) {
int myPipe = open("pipe", O_WRONLY); // returns a file descriptor for the pipe
cout << "\nThis is process A having PID= " << getpid(); //Get pid of process A
cout << "\nEnter the string: ";
cin >> str;
len = strlen(str);
write(myPipe, str, len); //Process A write to the named pipe
cout << "Process A sent " << str;
close(myPipe); //closes the file descriptor fields.
}
/* Process B */
else {
int myPipe = open("pipe", O_RDONLY); //Open the pipe and returns file descriptor
char buffer[21];
int pid_child;
pid_child = wait(&status); //wait until any one child process terminates
int length = read(myPipe, buffer, 20); //reads up to size bytes from pipe with descriptor fields, store results
// in buffer;
cout<< "\n\nThis is process B having PID= " << getpid();//Get pid of process B
buffer[length] = '\0';
cout << "\nProcess B received " << buffer;
i = 0;
//Reverse the string
for (length = length - 1; length >= 0; length--)
str[i++] = buffer[length];
str[i] = '\0';
cout << "\nReverse of string is " << str;
close(myPipe);
}
unlink("pipe");
return 0;
}
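As a side note, mkfifo() from <sys/stat.h> is the more conventional call for creating a FIFO than mknod(); a sketch of the equivalent line, with the same permissions as above:
#include <sys/stat.h>

// Equivalent to the mknod() call above: create a FIFO named "pipe",
// readable and writable by its owner.
mkfifo("pipe", S_IRUSR | S_IWUSR);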
I'm new to programming, and I'm trying to write a C++ program for Linux which would create a child process, and this child process would execute an external program. The output of that program should be redirected to the main program and saved into a string variable, preserving all the spaces and newlines. I don't know how many lines/characters the output will contain.
This is the basic idea:
#include <iostream>
#include <string>
#include <cstring>
#include <cerrno> // for errno used with std::strerror below
#include <unistd.h>
#include <sys/wait.h>
int main()
{
int pipeDescriptors[2];
pipe(pipeDescriptors);
pid_t pid = fork();
if (pid == -1)
{
std::cerr << __LINE__ << ": fork() failed!\n" <<
std::strerror(errno) << '\n';
return 1;
}
else if (!pid)
{
// Child process
close(pipeDescriptors[0]); // Not gonna read from here
if (dup2(pipeDescriptors[1], STDOUT_FILENO) == -1) // Redirect output to the pipe
{
std::cerr << __LINE__ << ": dup2() failed!\n" <<
std::strerror(errno) << '\n';
return 1;
}
close(pipeDescriptors[1]); // Not needed anymore
execlp("someExternalProgram", "someExternalProgram", NULL);
}
else
{
// Parent process
close(pipeDescriptors[1]); // Not gonna write here
int stdIn = dup(STDIN_FILENO); // Save the standard input for further usage
if (dup2(pipeDescriptors[0], STDIN_FILENO) == -1) // Redirect input to the pipe
{
std::cerr << __LINE__ << ": dup2() failed!\n" <<
std::strerror(errno) << '\n';
return 1;
}
close(pipeDescriptors[0]); // Not needed anymore
int childExitCode;
wait(&childExitCode);
if (childExitCode == 0)
{
std::string childOutput;
char c;
while (std::cin.read(&c, sizeof(c)))
{
childOutput += c;
}
// Do something with childOutput...
}
if (dup2(stdIn, STDIN_FILENO) == -1) // Restore the standard input
{
std::cerr << __LINE__ << ": dup2() failed!\n" <<
std::strerror(errno) << '\n';
return 1;
}
// Some further code goes here...
}
return 0;
}
The problem with the above code is that when std::cin.read() reads the last byte in the input stream, it doesn't actually "know" that this byte is the last one, so it tries to read further, which sets failbit and eofbit on std::cin, and I cannot read from standard input later anymore. std::cin.clear() resets those flags, but stdin still remains unusable.
If I could get the precise size in bytes of the stdin content without going beyond the last character in the stream, I would be able to use std::cin.read() to read exactly that many bytes into a string variable. But I guess there is no way to do that.
So how can I solve this problem? Should I use an intermediate file for writing the output of the child process into it and reading it later from the parent process?
The child process writes into the pipe, but the parent doesn't read the pipe until the child process terminates. If the child writes more than the pipe buffer size, it blocks waiting for the parent to read the pipe, but the parent is blocked waiting for the child to terminate, leading to a deadlock.
To avoid that, the parent process must keep reading the pipe until EOF and only then use wait to get the child process exit status.
E.g.:
// Read entire child output.
std::string child_stdout{std::istreambuf_iterator<char>{std::cin},
std::istreambuf_iterator<char>{}};
// Get the child exit status.
int childExitCode;
if(wait(&childExitCode) == -1)
std::abort(); // wait failed.
You may also like to open a new istream from the pipe file descriptor to avoid messing up std::cin state.
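For the separate-istream route, one non-portable possibility (a GNU libstdc++ extension, so only a sketch under that assumption) is __gnu_cxx::stdio_filebuf, which wraps an already-open file descriptor such as the read end of the pipe from the question:
#include <ext/stdio_filebuf.h>   // GNU libstdc++ extension, not standard C++
#include <istream>
#include <iterator>
#include <string>

// pipeDescriptors[0] is the parent's read end from the question's code.
__gnu_cxx::stdio_filebuf<char> pipe_buf(pipeDescriptors[0], std::ios::in);
std::istream child_out(&pipe_buf);

std::string child_stdout{std::istreambuf_iterator<char>{child_out},
                         std::istreambuf_iterator<char>{}};
This leaves std::cin untouched, so there is nothing to restore afterwards.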
I have been struggling for two days to fix this final bug in my code, but I can't seem to find the error. The code is supposed to (in order):
Receive a string from the user (in this case me)
Create a child process
Send the string to the child process
Rework the string so that every word starts with a capital letter
Send the string back to the parent with the changes
Display the string
The code runs fine until the parent reads. An example output is:
Input: "helLO tHerE"
Parent writes "helLO tHerE"
Child reads "helLO tHerE"
Child writes "Hello There"
Parent reads ##$%^$#%^&* - or some other such non-standard characters, then displays error -
double free or corruption (out): 0x00007ffeeebb2690 ***
Below is my code:
#include <iostream>
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string>
#include <algorithm>
using namespace std;
int main(){
int fd[2];
int pfc[2];
int status = 0;
string val = "";
if(pipe(fd) == -1 || pipe(pfc) == -1) fprintf(stderr,"Pipe failed");
pid_t pid = fork();
// fork() returns 0 for child process, child-pid for parent process.
if (pid == 0){ // child: reading only, so close the write-descriptor
string writeval = "";
close(fd[1]);
// now read the data (will block)
read(fd[0], &val, sizeof(val));
cout << "Child reads " << val.c_str() << endl;
string temp = " " + val;
transform(temp.begin(), temp.end(), temp.begin(), ::tolower);
for(size_t i = 1; i < temp.length(); i++){
if(!isspace(temp[i]) && isspace(temp[i-1])){
temp[i] = toupper(temp[i]);
}
}
writeval = temp.substr(1, temp.length() - 1);
// close the read-descriptor
close(fd[0]);
close(pfc[0]);
cout << "Child writes " << writeval.c_str() << endl;
write(pfc[1], &writeval, sizeof(writeval));
close(pfc[1]);
exit(0);
}
else{
string readval = "";
string temp ="";
// parent: writing only, so close read-descriptor.
close(fd[0]);
// send the value on the write-descriptor.
while(getline(cin, temp)){
val += temp;
}
write(fd[1], &val, sizeof(val));
cout << "Parent writes " << val << endl;
// close the write descriptor
close(fd[1]);
//wait(&status);
close(pfc[1]);
read(pfc[0], &readval, sizeof(readval));
cout << "Parent reads " << readval << endl;
close(pfc[0]);
}
return 0;
}
So the answer is simple, but subtle. In the child process I was passing the address of the writeval std::string object to write(), while in the parent process I was reading into the address of a different std::string object, readval. That transfers the string's internal representation (which can include a pointer that is only meaningful at writeval's address in the child) rather than the character data itself, so readval in the parent ends up holding a bogus pointer: hence the garbage output and the "double free or corruption" crash. Declaring a single variable outside the if/else, like val, happens to work here because a short string's characters live inside the object itself and the object ends up at the same address in both processes, but the robust fix is to write the characters (writeval.c_str()) and read them into a plain char buffer.
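For reference, a minimal sketch of that more robust fix, reusing the question's pfc descriptors and variable names (the buffer size is just illustrative):
// Child side: send the character data plus the terminating '\0',
// not the std::string object itself.
write(pfc[1], writeval.c_str(), writeval.size() + 1);

// Parent side: read raw bytes into a char buffer, terminate it,
// then build the string from it.
char buf[1024];
ssize_t n = read(pfc[0], buf, sizeof(buf) - 1);
if (n > 0) {
    buf[n] = '\0';
    string readval(buf);
    cout << "Parent reads " << readval << endl;
}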
Currently I am making a C/C++ program for the Linux Operating system.
I want to use a named pipe to communicate a PID (process ID) between two programs.
The pipe has been created and is visible in the directory.
The Get PID program says that the file descriptor returns 3, while it should return 0 if it could open the pipe. What am I doing wrong?
Get PID
// Several includes
using namespace std;
int main(int argc, char *argv[]) {
pid_t pid;
int sig = 22;
int succesKill;
int iFIFO;
char sPID[5] = {0,1,2,3,'\0'};
iFIFO = open("IDpipe" , O_RDONLY);
if(iFIFO != 0)
{
cerr << "File descriptor does not return 0, but: " << iFIFO << endl;
return EXIT_FAILURE;
}
read(iFIFO, sPID, strlen(sPID));
cerr << "In sPID now is: " << sPID << endl;
close(iFIFO);
pid = atoi(sPID);
cout << "The PID I will send signals to is: " << pid << "." << endl;
while(1)
{
succesKill = kill(pid, sig);
cout << "Tried to send signal" << endl;
sleep(5);
}
return EXIT_SUCCESS;
}
Send PID
// Several includes
using namespace std;
void catch_function(int signo);
volatile sig_atomic_t iAmountSignals = 0;
int main(void) {
pid_t myPID;
int iFIFO;
char sPID[5] = {'l','e','e','g','\0'};
myPID = getpid();
sprintf(sPID, "%d",myPID);
cout << "My PID is: " << sPID << endl;
iFIFO = open("IDpipe" , O_WRONLY);
if(iFIFO == -1)
{
cerr << "Pipe can't be opened for writing, error: " << errno << endl;
return EXIT_FAILURE;
}
write(iFIFO, sPID, strlen(sPID));
close(iFIFO);
if (signal(22, catch_function) == SIG_ERR) {
cerr << "An error occurred while setting a signal handler." << endl;
return EXIT_FAILURE;
}
cout << "Raising the interactive attention signal." << endl;
if (raise(22) != 0) {
cerr << "Error raising the signal." << endl;
return EXIT_FAILURE;
}
while(1)
{
cout << "iAmountSignals is: " << iAmountSignals << endl;
sleep(1);
}
cout << "Exit." << endl;
return EXIT_SUCCESS;
}
void catch_function(int signo) {
switch(signo) {
case 22:
cout << "Caught a signal 22" << endl;
if(iAmountSignals == 9)
{iAmountSignals = 0;}
else
{++iAmountSignals;}
break;
default:
cerr << "Thats the wrong signal.." << endl;
break;
}
}
Terminal output
Output
open() returns the newly created file descriptor. It cannot return 0 for the simple reason that the new process already has a file descriptor 0. That would be standard input.
The return value of 3 is the expected result from open(), in this case, because that would be the next available file descriptor after standard input, output, and error. If open() couldn't open the file descriptor, it would return -1.
But besides that, your code also has a bunch of other bugs:
sprintf(sPID, "%d",myPID);
// ...
write(iFIFO, sPID, strlen(sPID));
If your process ID happens to be only 3 digits long (which is possible), this will write three bytes to the pipe.
If your process ID happens to be five digits long (which is even more likely), this will write 5 bytes plus the '\0' byte, for a total of six bytes written to the five byte-long sPID buffer, overrunning the array and resulting in undefined behavior.
The actual results are, of course, undefined, but a typical C++ implementation will end up clobbering the first byte of whatever is the next variable on the stack, which is:
int iFIFO;
which is your file descriptor. So, if your luck runs out and your new process gets a five-digit process id, and this is a little-endian C++ implementation with no padding between the variables, the low-order byte of iFIFO gets overwritten with 0; and if the code was compiled without any optimizations, the iFIFO file descriptor ends up set to 0. Hilarity ensues.
Furthermore, on the other side of the pipe:
char sPID[5] = {0,1,2,3,'\0'};
// ...
read(iFIFO, sPID, strlen(sPID));
Because the first byte of sPID is initialized to 0, strlen(sPID) is 0, so this will always execute read(iFIFO, sPID, 0) and not read anything.
After that:
pid = atoi(sPID);
atoi() expects a '\0'-terminated string. read() only returns the bytes it read; it will not '\0'-terminate them for you. It is your responsibility to place a '\0' after the input that was read (making sure, of course, that the read buffer is big enough) before calling atoi().
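As a sketch of both ends done safely (reusing the question's iFIFO descriptor; the buffer sizes are just illustrative):
// Sender: a pid_t can need more than four digits, so give the buffer
// plenty of room and let snprintf() bound the conversion.
char sPID[16];
int len = snprintf(sPID, sizeof(sPID), "%d", (int)getpid());
write(iFIFO, sPID, len);

// Receiver: read into a buffer, then '\0'-terminate before atoi().
char buf[16];
ssize_t n = read(iFIFO, buf, sizeof(buf) - 1);
if (n > 0) {
    buf[n] = '\0';
    pid_t pid = atoi(buf);
    cout << "The PID I will send signals to is: " << pid << "." << endl;
}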
Your logic appears to be incorrect.
if(iFIFO != 0)
should be
if(iFIFO == -1)
since open returns -1 on error. Otherwise it returns a valid file descriptor.
I have a multi-threaded C++03 application that presently uses popen() to invoke itself (same binary) and ssh (different binary) again in a new process and reads the output. However, when porting to Android NDK this is posing some issues, such as not having permission to access ssh, so I'm linking Dropbear ssh into my application to try to avoid that issue. Further, my current popen solution requires that stdout and stderr be merged together into a single FD, which is a bit messy, and I'd like to stop doing that.
I would think the pipe code could be simplified by using fork() instead, but I wonder how to drop all of the parent's stack/memory, which is not needed in the child of the fork. Here is a snippet of the old working code:
#include <iostream>
#include <stdio.h>
#include <string>
#include <errno.h>
using std::endl;
using std::cerr;
using std::cout;
using std::string;
void
doPipe()
{
// Redirect stderr to stdout with '2>&1' so that we see any error messages
// in the pipe output.
const string selfCmd = "/path/to/self/binary arg1 arg2 arg3 2>&1";
FILE *fPtr = ::popen(selfCmd.c_str(), "r");
const int bufSize = 4096;
char buf[bufSize + 1];
if (fPtr == NULL) {
cerr << "Failed attempt to popen '" << selfCmd << "'." << endl;
} else {
cout << "Result of: '" << selfCmd << "':\n";
while (true) {
if (::fgets(buf, bufSize, fPtr) == NULL) {
if (!::feof(fPtr)) {
cerr << "Failed attempt to fgets '" << selfCmd << "'." << endl;
}
break;
} else {
cout << buf;
}
}
if (pclose(fPtr) == -1) {
if (errno != 10) {
cerr << "Failed attempt to pclose '" << selfCmd << "'." << endl;
}
}
cout << "\n";
}
}
So far, this is loosely what I have done to convert to fork(), but fork needlessly duplicates the entire parent process memory space. Further, it does not quite work, because the parent never sees EOF on the outFD it is reading from the pipe(). Where else do I need to close the FDs for this to work? How can I do something like execlp() without supplying a binary path (not easily available on Android) but instead start over with the same binary and a blank image with new args?
#include <iostream>
#include <stdio.h>
#include <string>
#include <errno.h>
using std::endl;
using std::cerr;
using std::cout;
using std::string;
int
selfAction(int argc, char *argv[], int &outFD, int &errFD)
{
pid_t childPid; // Process id used for current process.
// fd[0] is the read end of the pipe and fd[1] is the write end of the pipe.
int fd[2]; // Pipe for normal communication between parent/child.
int fdErr[2]; // Pipe for error communication between parent/child.
// Create a pipe for IPC between child and parent.
const int pipeResult = pipe(fd);
if (pipeResult) {
cerr << "selfAction normal pipe failed: " << errno << ".\n";
return -1;
}
const int errorPipeResult = pipe(fdErr);
if (errorPipeResult) {
cerr << "selfAction error pipe failed: " << errno << ".\n";
return -1;
}
// Fork - error.
if ((childPid = fork()) < 0) {
cerr << "selfAction fork failed: " << errno << ".\n";
return -1;
} else if (childPid == 0) { // Fork -> child.
// Close read end of pipe.
::close(fd[0]);
::close(fdErr[0]);
// Close stdout and set fd[1] to it, this way any stdout of the child is
// piped to the parent.
::dup2(fd[1], STDOUT_FILENO);
::dup2(fdErr[1], STDERR_FILENO);
// Close write end of pipe.
::close(fd[1]);
::close(fdErr[1]);
// Exit child process.
exit(main(argc, argv));
} else { // Fork -> parent.
// Close write end of pipe.
::close(fd[1]);
::close(fdErr[1]);
// Provide fd's to our caller for stdout and stderr:
outFD = fd[0];
errFD = fdErr[0];
return 0;
}
}
void
doFork()
{
int argc = 4;
char *argv[4] = { "/path/to/self/binary", "arg1", "arg2", "arg3" };
int outFD = -1;
int errFD = -1;
int result = selfAction(argc, argv, outFD, errFD);
if (result) {
cerr << "Failed to execute selfAction." << endl;
return;
}
FILE *outFile = fdopen(outFD, "r");
FILE *errFile = fdopen(errFD, "r");
const int bufSize = 4096;
char buf[bufSize + 1];
if (outFile == NULL) {
cerr << "Failed attempt to open fork file." << endl;
return;
} else {
cout << "Result:\n";
while (true) {
if (::fgets(buf, bufSize, outFile) == NULL) {
if (!::feof(outFile)) {
cerr << "Failed attempt to fgets." << endl;
}
break;
} else {
cout << buf;
}
}
if (::close(outFD) == -1) {
if (errno != 10) {
cerr << "Failed attempt to close." << endl;
}
}
cout << "\n";
}
if (errFile == NULL) {
cerr << "Failed attempt to open fork file err." << endl;
return;
} else {
cerr << "Error result:\n";
while (true) {
if (::fgets(buf, bufSize, errFile) == NULL) {
if (!::feof(errFile)) {
cerr << "Failed attempt to fgets err." << endl;
}
break;
} else {
cerr << buf;
}
}
if (::close(errFD) == -1) {
if (errno != 10) {
cerr << "Failed attempt to close err." << endl;
}
}
cerr << "\n";
}
}
There are two kinds of child processes created in this fashion with different tasks in my application:
SSH to another machine and invoke a server that will communicate back to the parent that is acting as a client.
Compute a signature, delta, or merge file using rsync.
First of all, popen is a very thin wrapper on top of fork() followed by exec() [and some calls to pipe(), dup2() and so on to manage the ends of a pipe].
Second, the memory is only duplicated in the form of "copy-on-write" memory - meaning that unless one of the processes writes to some page, the actual physical memory is shared between the two processes.
It does mean, of course, that the OS has to create a memory map with 4-8 bytes per 4KB page [in typical cases] (probably plus some internal OS data to track how many copies there are of each page; but as long as a page remains the same as in the parent process, the child's copy just refers to the parent process's data). Compared to everything else involved in creating a new process and loading an executable file into it, that is a pretty small part of the time. Since you are almost immediately doing exec, not much of the parent process's memory will be touched, so very little will happen there.
My advice would be that if popen works, keep using popen. If popen doesn't quite do what you want for some reason, then use fork + exec - but make sure you know what the reason for doing so is.
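For illustration only, here is roughly what popen(cmd, "r") does internally; a sketch with most error handling trimmed, and note that the real popen() also records the child's PID so that pclose() can waitpid() for it:
#include <stdio.h>
#include <unistd.h>

// Hypothetical helper, not a drop-in replacement for popen().
FILE *popen_read_sketch(const char *cmd)
{
    int fd[2];
    if (pipe(fd) != 0)
        return NULL;

    pid_t pid = fork();
    if (pid < 0) {
        close(fd[0]);
        close(fd[1]);
        return NULL;
    }
    if (pid == 0) {                        // child: pipe's write end becomes stdout
        close(fd[0]);
        dup2(fd[1], STDOUT_FILENO);
        close(fd[1]);
        execl("/bin/sh", "sh", "-c", cmd, (char *)0);
        _exit(127);                        // exec failed
    }
    close(fd[1]);                          // parent: keep only the read end
    return fdopen(fd[0], "r");             // caller reads with fgets(), etc.
}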
I am working on code that will do Linux command piping. Basically, my code will parse the user's input command, then run it using the execvp function.
However, to do this, I need to know the command as well as its parameters. I have been trying to get the parsing to work correctly; however, it seems that when I run a test case, the output from both of the arrays that store their respective programs is the same. The commands/parameters are stored in char pointer arrays called prgname1 and prgname2.
For instance, if I were to run my program with the parameter "ps aux | grep [username]", then the output of prgname1[0] and prgname2[0] are both [username]. They are supposed to be ps and grep, respectively.
Can anyone take a look at my code and see where I might be having an error which is causing this?
Thanks!
#include <sys/wait.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <iostream>
#define MAX_PARA_NUM 5
#define MAX_COMMAND_LEN 1024
using namespace std;
int main(int argc, char *argv[]) {
char *prgname1[MAX_PARA_NUM], *prgname2[MAX_PARA_NUM];
char command[MAX_COMMAND_LEN];
int pfd[2];
pipe(pfd);
pid_t cid1, cid2;
char *full = argv[1];
char str[MAX_COMMAND_LEN];
int i = 0;
int j = 0;
int k = 0;
int ind = 0;
while (ind < strlen(full)) {
if (full[ind] == ' ') {
strncpy(command, str, i);
cout << command << endl;
prgname1[j] = command;
j++;
i = 0;
ind++;
}
else {
str[i] = full[ind];
i++;
ind++;
}
if(full[ind] == '|') {
i = 0;
j = 0;
ind+=2;
while (ind < strlen(full)) {
if (full[ind] == ' ') {
strncpy(command, str, i);
cout << command << endl;
prgname2[j] = command;
j++;
i = 0;
ind++;
}
else {
str[i] = full[ind];
i++;
ind++;
}
if (ind == strlen(full)) {
strncpy(command, str, i);
cout << command << endl;
prgname2[j] = command;
break;
}
}
}
}
// test output here not working correctly
cout << prgname1[0] << endl;
cout << prgname2[0] << endl;
// exits if no parameters passed
if (argc != 2) {
cout << "Usage:" << argv[0] << endl;
exit(EXIT_FAILURE);
}
// exits if there is a pipe error
if (pipe(pfd) == -1) {
cerr << "pipe" << endl;
exit(EXIT_FAILURE);
}
cid1 = fork(); // creates child process 1
// exits if there is a fork error
if (cid1 == -1 || cid2 == -1) {
cerr << "fork";
exit(EXIT_FAILURE);
}
// 1st child process executes and writes to the pipe
if (cid1 == 0) {
char **p = prgname1;
close(1); // closes stdout
dup(pfd[1]); // connects pipe output to stdout
close(pfd[0]); // closes pipe input as it is not needed
close(pfd[1]); // closes pipe output as pipe is connected
execvp(prgname1[0], p);
cerr << "execlp 1 failed" << endl;
cid2 = fork();
}
// 2nd child process reads from the pipe and executes
else if (cid2 == 0) {
char **p = prgname2;
close(0); // closes stdin
dup(pfd[0]); // connects pipe input to stdin
close(pfd[0]); // closes pipe input as pipe is connected
close(pfd[1]); // closes pipe output as it is not needed
execvp(prgname2[0], p);
cerr << "execlp 2 failed" << endl;
}
else {
sleep(1);
waitpid(cid1, NULL, 0);
waitpid(cid2, NULL, 0);
cout << "Program successfully completed" << endl;
exit(EXIT_SUCCESS);
}
return 0;
}
argv[1] gives you the first argument on the command line - not the entire command line. If you want the full list of command line arguments passed into the process, you will need to append argv[1], argv[2], ..., argv[argc - 1] together with a space between each.
Additionally, when you process the input, you set each prgname1[index] pointer to command, so every element ends up pointing at the same buffer (hence they all print the same value). You need to allocate space for each element in prgname1 and copy command into it (using strncpy). Alternatively, using std::string and std::vector eliminates much of your current code, as sketched below.
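For what it's worth, here is a sketch of that std::string/std::vector approach, with a hard-coded stand-in for the joined command line (splitting only on whitespace, which matches the "ps aux | grep [username]" test case):
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::string full = "ps aux | grep username";   // stand-in for the joined argv
    std::vector<std::string> prog1, prog2;         // argv for each side of '|'
    std::vector<std::string> *current = &prog1;

    std::istringstream iss(full);
    std::string token;
    while (iss >> token) {
        if (token == "|")
            current = &prog2;                      // switch to the second command
        else
            current->push_back(token);             // each element owns its own copy
    }

    std::cout << prog1[0] << '\n' << prog2[0] << '\n';   // prints "ps" then "grep"
    return 0;
}
To hand either list to execvp() you would then build a std::vector<char*> of pointers to each string's data, terminated by a null pointer.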