C/C++ - Run system("process &") and then write to its stdin

I am working on Linux with C/C++. I wrote a program with some threads (#include <pthread.h>) and I run it with sudo.
One thread runs a process (mplayer) and leaves it running by adding " &", so that system() can return quickly.
system("mplayer -loop 0 /mnt/usb/* &");
The mplayer process runs normally and plays music as expected.
After that, I get its process ID by running pidof. Let's say that it returns 2449. A POSIX mutex protects the reads and writes of that process ID between this thread and the second thread.
On the second thread I try to write data to mplayer by using the /proc/2449/fd/0 pipe (is it called a pipe or stream?):
system("echo \">\" > /proc/2449/fd/0");
system() returns 0, but the mplayer process does not get anything. The ">" command should play the next track.
Is the stdin stream being inherited by some other process?
There are several fd's listed under the 2449 process, is one of them (besides 0) the stdin stream?
root@pisanlink:/proc# cd 2449
root@pisanlink:/proc/2449# cd fd
root@pisanlink:/proc/2449/fd# ls
0 1 2 3 4 5 7
root@pisanlink:/proc/2449/fd#
I also tried another approach... I used popen() with write permissions. I tried sending the command with fprintf, but mplayer didn't seem to receive anything either.
If any more code is needed, please let me know.
Any hints will be appreciated. Thanks.

Use popen (not system) to open the process. It will create the process with a pipe that you can either read from or write to (but not both). In your case, you'd open it with "w" for writing. From there you can simply use fwrite to send data to the process' stdin.
Pseudo-code Example:
FILE * pFile = popen("mplayer -loop 0 /mnt/usb/*", "w");
if (pFile == NULL)
    // Handle error
// Send ">" to the process's stdin
const char * psData = ">";
const size_t nDataLen = strlen(psData);
size_t nNumWritten = fwrite(psData, 1, nDataLen, pFile);
if (nNumWritten != nDataLen)
    // Handle error
// The stream is block-buffered, so flush it or the data may sit in the
// buffer and never reach mplayer.
fflush(pFile);
...
pclose(pFile);
pFile = NULL;

I used mplayer's slave mode and a FIFO file as its input. It is working correctly.
Create the Linux fifo file with mkfifo:
system("mkfifo /tmp/slpiplay_fifo");
Open mplayer with:
system("mplayer -slave -idle -really-quiet -input file=/tmp/slpiplay_fifo /mnt/usb_slpiplay/* &");
Pass a "next" command to mplayer by using the fifo:
system("echo \"pt_step 1\" >> /tmp/slpiplay_fifo");

Related

Input redirection using dup2()

So I am trying to implement the following command line statement in C++ by using dup2() and execvp(): wc < inputFile.txt, and then return to my command line. So basically I am forking a process and executing that command in the child process.
However, my code produces the following error: wc: stdin: read: Bad file descriptor
Here is my code:
int file_desc = open(fileName.c_str(), O_WRONLY | O_APPEND);
int stdin = dup(0);
dup2(file_desc,0);
execvp (args2[0], args2); // now execute
dup2(stdin, 0);
So my thought process was that I needed to redirect the standard in (aka index 0 of the file descriptor table) to the file descriptor of the file, since index 0 is always stdin and that's where input is read from. Then after I execute, I replace it back with the original standard in. So I am confused about what I am doing wrong.
The file_desc is opened only for writing (O_WRONLY) - try opening it for reading (O_RDONLY).
You might also want to:
dup2() between fork() and exec() instead of saving and restoring stdin - fewer system calls, and it avoids a race in multi-threaded apps.
close file_desc in the parent process
close file_desc in the child process after the dup2 (and before the exec) - see the sketch below
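A minimal sketch of that fork/dup2/exec pattern (error handling abbreviated), using fileName and args2 from the question:

#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

int file_desc = open(fileName.c_str(), O_RDONLY);   // open for reading, not writing
pid_t pid = fork();
if (pid == 0) {
    // child: make the file the new stdin, then exec
    dup2(file_desc, 0);
    close(file_desc);
    execvp(args2[0], args2);
    _exit(127);                     // only reached if exec fails
} else if (pid > 0) {
    // parent: the child has its own copy of the descriptor, so close ours
    close(file_desc);
    int status;
    waitpid(pid, &status, 0);       // wait for wc to finish, then return to the prompt
}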

Cannot read output of processes launched under cmd.exe pipe

I hope your programming is going well.
I have a question that I hope has an easy answer due to my lack of knowledge.
I've used this code from this question - CreateProcess cmd.exe read/write pipes deadlock
And everything works well.
The problem is when I run other commands from the cmd.exe shell that require interactivity, for example, python or powershell, I get the initial output then nothing gets written to the pipe.
So this is what my input/output looks like:
static PCSTR commands[] = { "powershell\r\n", "dir\r\n", "help\r\n"};
ULONG n = RTL_NUMBER_OF(commands);
PCSTR* psz = commands;
do
{
    if (MessageBoxW(0, 0, L"force close ?", MB_YESNO) == IDYES)
    {
        DisconnectNamedPipe(hFile);
        break;
    }
    if (p = new U_IRP(&obj))
    {
        PCSTR command = *psz++;
        p->Write(command, (ULONG)strlen(command) * sizeof(CHAR));
        p->Release();
    }
} while (--n);
When the code runs, I get the initial powershell.exe prompt as so
PS C:\Users>
But after that nothing gets written to the pipe.
The code is using CreateProcess(... "cmd.exe" ...) and I have tried changing it from "cmd.exe" to "cmd.exe /c" and "cmd.exe /k", neither of which work.
Perhaps you would know what I need to do to read/write output to an interpreter such as python or powershell through a CreateProcess()-created pipe? Thanks for your help!
You exec cmd.exe and send a command to it via the pipe to start powershell. From then on, everything depends on the powershell implementation.
On Windows 7:
powershell uses ReadConsoleW to get its input, so it does not use your named pipe - it never reads from it. You can also notice that the console window becomes interactive after you exec powershell: powershell does not accept what you write to the pipe (it simply never reads from it) but reads user input from the console instead. However, after you manually type a command into the console and press Enter, you can get some pipe output - powershell uses (mixes) both WriteFile and WriteConsoleW for output, so some information is written via WriteFile and some via WriteConsoleW.
On Windows 10:
powershell uses ReadFile to get its input and WriteFile for output, so it reads your commands from the pipe and writes the results back to it, and everything works. You can also notice that the console window is inactive in this case - you cannot enter any text into it (unlike Win7).
So your code is completely fine. The problem is only in how the third-party program reads and writes its data: if it does not read from your pipe, there is nothing you can do here.

Sending data to stdin of another process through linux terminal

I've been trying to send data to stdin of a running process. Here is what I do:
In a terminal I've started a c++ program that simply reads a string and prints it. Code excerpt:
while (true) {
cin >> s;
cout << "I've just read " << s << endl;
}
I get the PID of the running program
I go to /proc/PID/fd/
I execute echo text > 0
Result: text appears in the terminal where the program is run. Note, not I've just read text, but simply text.
What am I doing wrong and what should I do to get this thing to print 'I've just read text'?
When you start your C++ program you need to make sure its input comes from a pipe rather than from a terminal. You can use cat | myapp to do that. Once it's running, you can use the PID of your application for echo text > /proc/PID/fd/0.
It could be a matter of stdout not being properly flushed -- see "Unix Buffering". Or you could be in a different shell as some commentators have suggested.
Generally, it's more reliable to handle basic interprocess communication via FIFOs or NODs -- named pipes. (Or alternatively redirect stdout and/or stderr to a file and read from that with your c++ program.)
Here are some good resources on how to use these in both the terminal and C++ (a minimal sketch follows the links).
"FIFO – Named pipes: mkfifo, mknod"
"Using Pipes in Linux Processes"
"Programming with FIFO: mkfifo(), mknod()"
FD 0 is the terminal the program is running from. When you write to FD 0, you are writing to the terminal the program is running from. FD 0 is not required to be opened in read-only mode; in practice it seems to be read/write mode, so you can write to it. (I suspect this is because FDs 0, 1 and 2 all refer to the same file description)
So echo text > /proc/PID/fd/0 just echoes text to the terminal.
To pipe input to the program, you would need to write to the other end of the pipe (actually a PTY, which mostly behaves like a pair of pipes). Most likely, whatever terminal emulator you're using (xterm, konsole, gnome-terminal) will have the other end open, so you could try writing to that.

Exec() read from file

I am working on creating a basic shell. I'm stuck on trying to get exec() to read in from an input file. Here's what I have. I'm unsure what arguments I should be feeding execvp(). Here, stringList[0] will be something along the lines of "ls" or "cat". If stringList[0] is ls the file would contain something along the lines of ls -a -l
int fd = open(iFile, O_RDONLY);
dup2(fd, 0);
close(fd);
execvp(stringList[0], ...);
cout << "Exec error!\n";
exit(1);
It sounds like you want to read a command from a file and then execute that command. If that's your objective, you should actually be executing the shell.
Your current approach of open then dup2 doesn't cause exec to read from a file, because exec never reads from standard input. It only reads from the executable (to load the program image). What your current approach does is redirect input, so that if exec is successful, the new program will have iFile as its standard input file.
You can just do this:
execl(shell, basename(shell), iFile, (char*)0);
Example: if iFile is the string "myCommand.sh", and shell is /bin/bash, then basename(shell) gives bash, and this is similar to running this on the command line:
$ bash myCommand.sh
For shell you probably want to use the current user's default shell. You can obtain this information portably using getpwuid or getpwuid_r.
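For example, a minimal sketch of looking up the user's login shell with getpwuid and handing it the command file (error handling omitted; iFile is the file name from the answer above):

#include <libgen.h>     // basename
#include <pwd.h>        // getpwuid
#include <string.h>     // strdup
#include <unistd.h>

struct passwd *pw = getpwuid(getuid());
const char *shell = (pw && pw->pw_shell && *pw->pw_shell) ? pw->pw_shell : "/bin/sh";

// Equivalent to running "bash myCommand.sh" when the user's shell is /bin/bash.
// basename() may modify its argument, so pass it a copy.
char *shellCopy = strdup(shell);
execl(shell, basename(shellCopy), iFile, (char*)0);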

Returning output from bash script to calling C++ function

I am writing a baby program for practice. What I am trying to accomplish is basically a simple little GUI which displays services (for Linux); with buttons to start, stop, enable, and disable services (Much like the msconfig application "Services" tab in Windows). I am using C++ with Qt Creator on Fedora 21.
I want to create the GUI with C++, and populating the GUI with the list of services by calling bash scripts, and calling bash scripts on button clicks to do the appropriate action (enable, disable, etc.)
But when the C++ GUI calls the bash script (using system("path/to/script.sh")), the return value only reflects the script's exit status. How do I receive the output of the script itself, so that I can in turn use it to display on the GUI?
For conceptual example: if I were trying to display the output of (systemctl --type service | cut -d " " -f 1) into a GUI I have created in C++, how would I go about doing that? Is this even the correct way to do what I am trying to accomplish? If not,
What is the right way? and
Is there still a way to do it using my current method?
I have looked for a solution to this problem but I can't find information on how to return values from Bash to C++, only how to call Bash scripts from C++.
We're going to take advantage of the popen function, here.
std::string exec(const char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";
    char buffer[128];
    std::string result = "";
    // fgets() returns NULL at end of output, so read until then
    while (fgets(buffer, sizeof(buffer), pipe) != NULL) {
        result += buffer;
    }
    pclose(pipe);
    return result;
}
This function takes a command as an argument, and returns the output as a string.
NOTE: this will not capture stderr! A quick and easy workaround is to redirect stderr to stdout, with 2>&1 at the end of your command.
Here is documentation on popen. Happy coding :)
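For instance, using the command from the question (just an illustration of calling the helper above):

// popen() runs the command through /bin/sh, so the pipeline works as-is
std::string services = exec("systemctl --type service | cut -d \" \" -f 1");
// 'services' now holds the command's output and can be split into lines for the GUI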
You have to run the commands using popen instead of system and then loop through the returned file pointer.
Here is a simple example for the command ls -l
#include <stdio.h>
#include <stdlib.h>

int main() {
    FILE *process;
    char buff[1024];

    process = popen("ls -l", "r");
    if (process != NULL) {
        // fgets() returns NULL at end of output, so check that instead of feof()
        while (fgets(buff, sizeof(buff), process) != NULL) {
            printf("%s", buff);
        }
        pclose(process);
    }
    return 0;
}
The long approach - which gives you complete control of stdin, stdout, and stderr of the child process, at the cost of fairly significant complexity - involves using fork and execve directly.
Before forking, set up your endpoints for communication - pipe works well, or socketpair. I'll assume you've invoked something like below:
int childStdin[2], childStdout[2], childStderr[2];
pipe(childStdin);
pipe(childStdout);
pipe(childStderr);
After fork, in child process before execve:
dup2(childStdin[0], 0); // childStdin read end to fd 0 (stdin)
dup2(childStdout[1], 1); // childStdout write end to fd 1 (stdout)
dup2(childStderr[1], 2); // childStderr write end to fd 2 (stderr)
... then close all of the original childStdin, childStdout, and childStderr descriptors.
After fork, in parent process:
close(childStdin[0]);  // parent only writes to the child's stdin, so close the read end
close(childStdout[1]); // parent only reads the child's stdout/stderr, so close the write ends
close(childStderr[1]);
Now, your parent process has complete control of the std i/o of the child process - and must safely multiplex childStdin[1], childStdout[0], and childStderr[0], while also monitoring for SIGCLD and eventually using a wait-series call to check the process termination code. pselect is particularly good for dealing with SIGCLD while dealing with std i/o asynchronously. See also select or poll of course.
If you want to merge the child's stdout and stderr, just dup2(childStdout[1], 2) and get rid of childStderr entirely.
The man pages should fill in the blanks from here. So that's the hard way, should you need it.
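Putting the pieces together, here is a compact sketch (error handling omitted, no select/pselect multiplexing, stderr left alone, and execlp used for brevity instead of execve); the command wc -w is just an example:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int childStdin[2], childStdout[2];
    pipe(childStdin);
    pipe(childStdout);

    pid_t pid = fork();
    if (pid == 0) {
        // child: wire up stdin/stdout, close the originals, then exec
        dup2(childStdin[0], 0);
        dup2(childStdout[1], 1);
        close(childStdin[0]);  close(childStdin[1]);
        close(childStdout[0]); close(childStdout[1]);
        execlp("wc", "wc", "-w", (char*)0);
        _exit(127);
    }

    // parent: keep only the ends it actually uses
    close(childStdin[0]);
    close(childStdout[1]);

    const char *msg = "count these words please\n";
    write(childStdin[1], msg, strlen(msg));
    close(childStdin[1]);                      // EOF for the child

    char buf[256];
    ssize_t n = read(childStdout[0], buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; printf("child said: %s", buf); }
    close(childStdout[0]);

    waitpid(pid, NULL, 0);
    return 0;
}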