So I am trying to implement the following command line statement in C++ using dup2() and execvp(): wc < inputFile.txt, and then return to my command line. So basically I am forking a process and executing that command in the child process.
However, my code produces the following error: wc: stdin: read: Bad file descriptor
Here is my code:
int file_desc = open(fileName.c_str(), O_WRONLY | O_APPEND);
int stdin = dup(0);
dup2(file_desc,0);
execvp (args2[0], args2); // now execute
dup2(stdin, 0);
So my thought process was that I needed to redirect standard in (aka index 0 of the file descriptor table) to the file descriptor of the file, since that index is always stdin and that's where input is read from. Then after I execute, I replace it with the original standard in. So I am confused about what I am doing wrong.
The file_desc is opened only for writing (O_WRONLY) - try opening it for reading (O_RDONLY).
You might also want to:
dup2() between fork() and exec() instead of saving and restoring stdin - fewer system calls, and it avoids a race in multi-threaded apps (see the sketch after this list).
close file_desc in the parent process
close file_desc in the child process after the dup2 (and before the exec)
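Putting those points together, a minimal sketch of the child-side redirection might look like this (assuming fileName and args2 are set up as in your code; error checking omitted):

#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

int file_desc = open(fileName.c_str(), O_RDONLY);   // open for reading, not writing
pid_t pid = fork();
if (pid == 0) {
    // child: point fd 0 (stdin) at the file, drop the spare descriptor, then exec
    dup2(file_desc, 0);
    close(file_desc);
    execvp(args2[0], args2);
    _exit(1);                                        // only reached if execvp fails
}
// parent: its own stdin was never touched, so there is nothing to restore
close(file_desc);
waitpid(pid, NULL, 0);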
Related
I've been trying to send data to stdin of a running process. Here is what I do:
In a terminal I've started a c++ program that simply reads a string and prints it. Code excerpt:
while (true) {
cin >> s;
cout << "I've just read " << s << endl;
}
I get the PID of the running program
I go to /proc/PID/fd/
I execute echo text > 0
Result: text appears in the terminal where the program is run. Note, not I've just read text, but simply text.
What am I doing wrong and what should I do to get this thing to print 'I've just read text'?
When you start your C++ program you need to make sure its input comes from a pipe rather than from a terminal. You can use cat | myapp to do that. Once it's running, you can take the PID of your application and run echo text > /proc/PID/fd/0.
It could be a matter of stdout not being properly flushed -- see "Unix Buffering". Or you could be in a different shell as some commentators have suggested.
Generally, it's more reliable to handle basic interprocess communication via FIFOs -- named pipes created with mkfifo or mknod. (Or alternatively redirect stdout and/or stderr to a file and read from that with your C++ program.)
Here are some good resources on how to use those in both the terminal and C++; a minimal reading-side sketch follows the list.
"FIFO – Named pipes: mkfifo, mknod"
"Using Pipes in Linux Processes"
"Programming with FIFO: mkfifo(), mknod()"
FD 0 is the terminal the program is running from, so when you write to FD 0 you are writing to that terminal. FD 0 is not required to be opened read-only; in practice it is usually open for reading and writing, which is why the write succeeds. (I suspect this is because FDs 0, 1 and 2 all refer to the same open file description.)
So echo text > /proc/PID/fd/0 just echoes text to the terminal.
To pipe input to the program, you would need to write to the other end of the pipe (actually a PTY, which mostly behaves like a pair of pipes). Most likely, whatever terminal emulator you're using (xterm, konsole, gnome-terminal) will have the other end open, so you could try writing to that.
I am writing a baby program for practice. What I am trying to accomplish is basically a simple little GUI which displays services (for Linux); with buttons to start, stop, enable, and disable services (Much like the msconfig application "Services" tab in Windows). I am using C++ with Qt Creator on Fedora 21.
I want to create the GUI with C++, populate it with the list of services by calling bash scripts, and call bash scripts on button clicks to do the appropriate action (enable, disable, etc.).
But when the C++ GUI calls the bash script (using system("path/to/script.sh")), the return value only tells me whether the command exited successfully. How do I capture the output of the script itself, so that I can use it to populate the GUI?
For conceptual example: if I were trying to display the output of (systemctl --type service | cut -d " " -f 1) into a GUI I have created in C++, how would I go about doing that? Is this even the correct way to do what I am trying to accomplish? If not,
What is the right way? and
Is there still a way to do it using my current method?
I have looked for a solution to this problem but I can't find information on how to return values from Bash to C++, only how to call Bash scripts from C++.
We're going to take advantage of the popen function, here.
#include <cstdio>
#include <string>

std::string exec(const char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";
    char buffer[128];
    std::string result;
    // Append each chunk of the command's output until EOF
    while (fgets(buffer, sizeof(buffer), pipe) != NULL) {
        result += buffer;
    }
    pclose(pipe);
    return result;
}
This function takes a command as an argument, and returns the output as a string.
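For example, using the command from your question (splitting the result and feeding it to your Qt widget is left to you):

// Capture the service list, then hand it to the GUI
std::string services = exec("systemctl --type service | cut -d \" \" -f 1");
// split 'services' on '\n' and add each line to your list widget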
NOTE: this will not capture stderr! A quick and easy workaround is to redirect stderr to stdout, with 2>&1 at the end of your command.
Here is documentation on popen. Happy coding :)
You have to run the commands using popen instead of system and then loop through the returned file pointer.
Here is a simple example for the command ls -l
#include <stdio.h>
#include <stdlib.h>

int main() {
    FILE *process;
    char buff[1024];

    process = popen("ls -l", "r");
    if (process != NULL) {
        /* Checking the fgets return avoids printing the last line twice at EOF */
        while (fgets(buff, sizeof(buff), process) != NULL) {
            printf("%s", buff);
        }
        pclose(process);
    }
    return 0;
}
The long approach - which gives you complete control of stdin, stdout, and stderr of the child process, at the cost of fairly significant complexity - involves using fork and execve directly.
Before forking, set up your endpoints for communication - pipe works well, or socketpair. I'll assume you've invoked something like below:
int childStdin[2], childStdout[2], childStderr[2];
pipe(childStdin);
pipe(childStdout);
pipe(childStderr);
After fork, in child process before execve:
dup2(childStdin[0], 0); // childStdin read end to fd 0 (stdin)
dup2(childStdout[1], 1); // childStdout write end to fd 1 (stdout)
dup2(childStderr[1], 2); // childStderr write end to fd 2 (stderr)
.. then close all of childStdin, childStdout, and childStderr.
After fork, in parent process:
close(childStdin[0]);  // parent keeps only the write end of the child's stdin pipe
close(childStdout[1]); // parent keeps only the read ends of the child's stdout/stderr pipes
close(childStderr[1]);
Now your parent process has complete control of the std i/o of the child process - it must safely multiplex childStdin[1], childStdout[0], and childStderr[0], while also monitoring for SIGCHLD and eventually using a call from the wait() family to check the child's termination code. pselect is particularly good for dealing with SIGCHLD while handling the std i/o asynchronously; see also select or poll, of course.
If you want to merge the child's stdout and stderr, just dup2(childStdout[1], 2) and get rid of childStderr entirely.
The man pages should fill in the blanks from here. So that's the hard way, should you need it.
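Here's a rough end-to-end sketch of that wiring (error checking and the multiplexing loop are omitted, execvp is used instead of a raw execve, and runChild is just an illustrative name):

#include <sys/wait.h>
#include <unistd.h>

// Spawn 'argv' (a NULL-terminated vector) with all three std streams piped.
// On return the parent holds the write end of the child's stdin and the
// read ends of its stdout/stderr.
pid_t runChild(char* const argv[], int* inFd, int* outFd, int* errFd) {
    int childStdin[2], childStdout[2], childStderr[2];
    pipe(childStdin);
    pipe(childStdout);
    pipe(childStderr);

    pid_t pid = fork();
    if (pid == 0) {                          // child
        dup2(childStdin[0], 0);
        dup2(childStdout[1], 1);
        dup2(childStderr[1], 2);
        // close every pipe fd; the dup'ed copies on 0/1/2 stay open
        close(childStdin[0]);  close(childStdin[1]);
        close(childStdout[0]); close(childStdout[1]);
        close(childStderr[0]); close(childStderr[1]);
        execvp(argv[0], argv);
        _exit(127);                          // only reached if exec fails
    }

    close(childStdin[0]);                    // parent's unused ends
    close(childStdout[1]);
    close(childStderr[1]);
    *inFd  = childStdin[1];                  // write here to feed the child's stdin
    *outFd = childStdout[0];                 // read the child's stdout from here
    *errFd = childStderr[0];                 // read the child's stderr from here
    return pid;
}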
I am working on Linux and C/C++. I wrote a program with some threads (#include pthread.h) and I run it with sudo.
One thread runs a process (mplayer) and leaves it running by adding " &", so that system() can return quickly.
system("mplayer -loop 0 /mnt/usb/* &");
The mplayer process runs normally and plays music as expected.
After that, I get its process ID by running pidof. Let's say that it returns 2499. A POSIX mutex is used to write/read that process ID on this thread and on the second thread.
On the second thread I try to write data to mplayer by using the /proc/2499/fd/0 pipe (is it called a pipe or stream?):
system("echo \">\" > /proc/2499/fd/0");
system() returns 0, but the mplayer process does not get anything. The ">" command should play the next track.
Is the stdin stream being inherited by some other process?
There are several fd's listed under the 2499 process; is one of them (besides 0) the stdin stream?
root@pisanlink:/proc# cd 2499
root@pisanlink:/proc/2499# cd fd
root@pisanlink:/proc/2499/fd# ls
0 1 2 3 4 5 7
root@pisanlink:/proc/2499/fd#
I also tried another approach... I used popen() with write permissions and tried sending the command with fprintf, but mplayer didn't seem to receive anything either.
If any more code is needed, please let me know.
Any hints will be appreciated. Thanks.
Use popen (not system) to open the process. It will create the process with a pipe that you can either read from or write to (but not both). In your case, you'd open it with "w" for writing. From there you can simply use fwrite to send data to the process' stdin.
Pseudo-code Example:
FILE * pFile = popen("mplayer -loop 0 /mnt/usb/*", "w");
if(pFile == NULL)
// Handle error
// Send ">" to process' stdin
const char * psData = ">";
const size_t nDataLen = strlen(psData);
size_t nNumWritten = fwrite(psData, 1, nDataLen, pFile);
if(nNumWritten != nDataLen)
// Handle error
...
pclose(pFile);
pFile = NULL;
I used the mplayer slave option and the input as a fifo file. It is working correctly.
Create the Linux fifo file with mkfifo:
system("mkfifo /tmp/slpiplay_fifo");
Open mplayer with:
system("mplayer -slave -idle -really-quiet -input file=/tmp/slpiplay_fifo /mnt/usb_slpiplay/* &");
Pass a "next" command to mplayer by using the fifo:
system("echo \"pt_step 1\" >> /tmp/slpiplay_fifo");
Alright, I'm setting up a pipe to communicate with the children of my process.
First of all, I tried to save a backup of my fds so I can access them later for some stuff, but somehow it just gets stuck when duplicating the fds.
int pipeFd [2];
int pid;
pipe (pipeFd);
//Safeguard of the Original FDs
int fdSG [2];
perror ("fdsg create");
dup2 (1, fdSG [1]);
perror ("dup2 sfg1");
dup2 (0, fdSG [0]);
perror ("dup2 sfg2");
dup2 (pipeFd [1], 1);
The program gets stuck on the last instruction shown here.
The terminal output is the following:
fdsg create: Success
dup2 sfg1: Bad file descriptor
dup2 sfg2: Bad file descriptor
dup2: Bad file descriptor
Does any of you have any clue why this is happening?
From the code you've shown, you haven't initialised fdSG. That's the problem: dup2() needs its second argument to be a valid descriptor number, and the uninitialised garbage in fdSG generally isn't, hence the Bad file descriptor errors.
Since you seem to want to copy an fd rather than replace an existing one, you should use dup for those backup copies instead; it picks a free fd for you (see the sketch below). Alternatively, you could initialise fdSG to valid fds first.
From the manpage:
dup() uses the lowest-numbered unused descriptor for the new descriptor.
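A minimal sketch of that backup-and-restore, assuming pipeFd has already been filled in by pipe():

// dup() allocates the backup descriptors itself, so nothing needs pre-initialising
int fdSG[2];
fdSG[1] = dup(1);      // copy of the original stdout
fdSG[0] = dup(0);      // copy of the original stdin

dup2(pipeFd[1], 1);    // stdout now feeds the pipe

// ... later, restore stdout and drop the backups
dup2(fdSG[1], 1);
close(fdSG[0]);
close(fdSG[1]);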
Hey guys, I am trying to write a shell in C++ and I am having trouble with feeding an input file to the exec commands. For example, bc on Linux can do "bc < text.txt", which evaluates the lines of the file in a batch-like fashion. I am trying to do likewise with my shell. Something along the lines of:
char* input = "input.txt";
execlp(input, bc, ...) // I don't really know how to call the execlp command, and all the docs and searches have been kind of cryptic for someone just starting out.
Is this even possible with the exec commands? Or will I have to read in line by line and run the exec commands in a for loop??
You can open the file and then dup2() the file descriptor to standard input, or you can close standard input and then open the file (which works because standard input is descriptor 0 and open() returns the lowest numbered available descriptor).
const char *input = "input.txt";
int fd = open(input, O_RDONLY);
if (fd < 0)
throw "could not open file";
if (dup2(fd, 0) != 0) // Testing that the file descriptor is 0
throw "could not dup2";
close(fd); // You don't want two copies of the file descriptor
execvp(command[0], &command[0]);
fprintf(stderr, "failed to execvp %s\n", command[0]);
exit(1);
You would probably want cleverer error handling than the throw, not least because this is the child process and it is the parent that needs to know. But the throw sites mark points where errors are handled.
Note the close().
The redirect is being performed by the shell -- it's not an argument to bc. You can let bash do that work for you (the equivalent of bash -c "bc < text.txt").
For example, you can use execvp with a file argument of "bash" and the following argument list (a sketch follows the list):
"bash"
"-c"
"bc < text.txt"