C++ external program IO

I need to run an external program from within a C++ application. I need the output from that program (I want to see it while the program is still running), and it also needs to get input.
What is the best and most elegant way to redirect the IO? Should it be running in its own thread? Any examples?
It's running on OS X.
I implemented it like this:
ProgramHandler::ProgramHandler(std::string prog): program(prog)
{
    // Create two pipes
    std::cout << "Created Class\n";
    pipe(pipe1);
    pipe(pipe2);

    int id = fork();
    std::cout << "id: " << id << std::endl;

    if (id == 0)
    {
        // In child
        // Close current `stdin` and `stdout` file handles
        close(fileno(stdin));
        close(fileno(stdout));

        // Duplicate pipes as new `stdin` and `stdout`
        dup2(pipe1[0], fileno(stdin));
        dup2(pipe2[1], fileno(stdout));

        // We don't need the other ends of the pipes, so close them
        close(pipe1[1]);
        close(pipe2[0]);

        // Run the external program
        execl("/bin/ls", "bin/ls");

        char buffer[30];
        while (read(pipe1[0], buffer, 30)) {
            std::cout << "Buf: " << buffer << std::endl;
        }
    }
    else
    {
        // We don't need the read end of the first pipe (the child's `stdin`)
        // or the write end of the second pipe (the child's `stdout`)
        close(pipe1[0]);
        close(pipe2[1]);

        // Now you can write to `pipe1[1]` and it will end up as `stdin` in the child
        // Read from `pipe2[0]` to read from the child's `stdout`
    }
}
but as output I get this:
Created Class
id: 84369
id: 0
I don't understand why it's called twice and why it won't fork the first time. What am I doing or understanding wrong?

If you are on a POSIX system (like OS X or Linux), you have to learn the system calls pipe, fork, close, dup2 and exec.
What you do is create two pipes, one for reading from the external application and one for writing to it. Then you fork to create a new process, and in the child process you set up the pipes as stdin and stdout and then call exec, which replaces the child process with the external program, using your new stdin and stdout file handles. In the parent process you can now read the output from the child process and write to its input.
In pseudo-code:
// Create two pipes
pipe(pipe1);
pipe(pipe2);

if (fork() == 0)
{
    // In child
    // Close current `stdin` and `stdout` file handles
    close(FILENO_STDIN);
    close(FILENO_STDOUT);

    // Duplicate pipes as new `stdin` and `stdout`
    dup2(pipe1[0], FILENO_STDIN);
    dup2(pipe2[1], FILENO_STDOUT);

    // We don't need the other ends of the pipes, so close them
    close(pipe1[1]);
    close(pipe2[0]);

    // Run the external program
    exec("/some/program", ...);
}
else
{
    // We don't need the read end of the first pipe (the child's `stdin`)
    // or the write end of the second pipe (the child's `stdout`)
    close(pipe1[0]);
    close(pipe2[1]);

    // Now you can write to `pipe1[1]` and it will end up as `stdin` in the child
    // Read from `pipe2[0]` to read from the child's `stdout`
}
Read the manual pages of the system calls for more information about them. You also need to add error checking, as all of these system calls may fail.
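For reference, here is a minimal runnable version of the sketch above with basic error checking. /bin/cat is used as a stand-in for the external program (an assumption for the demo; substitute your own binary):

#include <cstdio>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int toChild[2], fromChild[2];
    if (pipe(toChild) == -1 || pipe(fromChild) == -1) {
        perror("pipe");
        return 1;
    }
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        // Child: wire the pipes to stdin/stdout, then exec.
        dup2(toChild[0], STDIN_FILENO);
        dup2(fromChild[1], STDOUT_FILENO);
        close(toChild[0]); close(toChild[1]);
        close(fromChild[0]); close(fromChild[1]);
        execl("/bin/cat", "cat", (char*)NULL);
        perror("execl"); // only reached if exec fails
        _exit(127);
    }
    // Parent: keep only the ends it uses.
    close(toChild[0]);
    close(fromChild[1]);
    const char msg[] = "hello child\n";
    write(toChild[1], msg, sizeof msg - 1);
    close(toChild[1]); // EOF for the child's stdin
    char buf[256];
    ssize_t n;
    while ((n = read(fromChild[0], buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, n);
    close(fromChild[0]);
    waitpid(pid, NULL, 0);
    return 0;
}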

Well, there is a pretty standard way to do this. In general you fork the process and close the standard I/O (fd 0, 1) of the child. Before forking, create two pipes; after forking, close the standard input and output in the child and connect them to the pipes using dup.
Pseudo-code; this shows only one side of the connection, I'm sure you can figure out the other side:
int main()
{
    int fd[2]; // file descriptors
    pipe(fd);

    // Fork child process
    if (fork() == 0)
    {
        char buffer[80];
        close(1);
        dup(fd[1]);   // this takes the first free descriptor: the one you just closed
        close(fd[1]); // clean up
    }
    else
    {
        close(0);
        dup(fd[0]);
        close(fd[0]);
    }
    return 0;
}
After you have the pipe set up, with one of the parent's threads waiting on a select or something similar, you can call exec for your external tool and have all the data flowing.
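To complete both branches of that sketch, a hedged version where the child execs ls (an arbitrary example program) after rewiring its stdout, and the parent reads the result through its replaced stdin:

#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int fd[2];
    pipe(fd);
    if (fork() == 0)
    {
        close(1);      // free descriptor 1 (stdout)
        dup(fd[1]);    // dup takes the lowest free descriptor: 1
        close(fd[0]);  // close the original pipe ends
        close(fd[1]);
        execlp("ls", "ls", (char*)NULL); // output now goes into the pipe
        _exit(127);    // only reached if exec fails
    }
    else
    {
        close(0);      // same trick: the pipe read end becomes stdin
        dup(fd[0]);
        close(fd[0]);
        close(fd[1]);  // important, or read() never sees end-of-file
        char buf[256];
        ssize_t n;
        while ((n = read(0, buf, sizeof buf)) > 0)
            write(1, buf, n); // echo the child's output
        wait(NULL);
    }
    return 0;
}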

The basic approach to communicating with a different program on POSIX systems is to set up a pipe(), then fork() your program, close() and dup2() file descriptors into the correct location, and finally exec*() the desired executable.
Once this is done, you have your two programs connected with suitable streams. Unfortunately, this doesn't deal with any form of asynchronous processing of the two programs. That is, you will likely want to access the created file descriptors with suitable asynchronous and non-blocking operations (i.e., set the various file descriptors to be non-blocking and/or access them only when poll() indicates that you can access them). If there is just that one executable, it may be easier to control it from a separate thread, though.
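As an illustration of the poll() approach, a small sketch that drains a child's output as it arrives; child_out is an assumed variable holding the parent's read end of the child's stdout pipe:

#include <poll.h>
#include <unistd.h>
#include <cstdio>

// Drain a child's output as it becomes available, without blocking forever.
void pump_output(int child_out)
{
    struct pollfd pfd;
    pfd.fd = child_out;
    pfd.events = POLLIN;
    char buf[4096];
    for (;;) {
        int rc = poll(&pfd, 1, 1000); // wait up to one second
        if (rc < 0) { perror("poll"); break; }
        if (rc == 0) continue;        // timeout: do other work here
        if (pfd.revents & (POLLIN | POLLHUP)) {
            ssize_t n = read(child_out, buf, sizeof buf);
            if (n <= 0) break;        // 0 == EOF: the child closed its end
            write(STDOUT_FILENO, buf, n);
        }
    }
}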

A different approach (if you are also writing the external program) is to use shared memory. Something along the lines of (pseudo-code):
// create shared memory
int l_shmid = shmget(key, size, 0600 | IPC_CREAT);
if (l_shmid < 0)
    ERROR

// attach to shared memory
dataptr* ptr = (dataptr*)shmat(l_shmid, NULL, 0);

// run external program
pid_t l_pid = fork();
if (l_pid == (pid_t)-1)
{
    ERROR
    // detach & delete shared mem
    shmdt(ptr);
    shmctl(l_shmid, IPC_RMID, (shmid_ds*)NULL);
    return;
}
else if (l_pid == 0)
{
    // child:
    execl(path, args, NULL);
    return;
}

// wait for the external program to finish
int l_stat(0);
waitpid(l_pid, &l_stat, 0);

// read from shmem
memset(mydata, .., ..);
memcpy(mydata, ptr, ...);

// detach & close shared mem
shmdt(ptr);
shmctl(l_shmid, IPC_RMID, (shmid_ds*)NULL);
Your external program can write to shared memory in a similar way. No need for pipes & reading/writing/selecting etc.
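For illustration, the external program's side might look roughly like the following sketch; the key and segment size are assumptions and must match whatever the parent used:

#include <sys/shm.h>
#include <cstring>
#include <cstdio>

int main()
{
    key_t key = 0x1234; // assumption: must match the parent's key
    int shmid = shmget(key, 1024, 0600); // segment already created by parent
    if (shmid < 0) { perror("shmget"); return 1; }
    char* ptr = (char*)shmat(shmid, NULL, 0);
    if (ptr == (char*)-1) { perror("shmat"); return 1; }
    std::strcpy(ptr, "result from external program");
    shmdt(ptr); // detach; the parent removes the segment with IPC_RMID
    return 0;
}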

Related

using file lock for a single instance program fails if spawn child process

I need to have a single-instance program in Linux. That is, if someone tries to run the program, the new instance should print a message and exit. At the moment I have a lock mechanism like this:
int main() {
    // init some stuff...

    // set or check lock
    auto pidFile = open("/var/run/my-app.lock", O_CREAT | O_RDWR, 0666);
    auto rc = flock(pidFile, LOCK_EX | LOCK_NB);
    if (rc) {
        if (errno == EWOULDBLOCK) {
            cout << "Program is already running!\n";
            exit(0);
        }
    }

    // do other stuff or the main loop

    // when the loop ends by sigkill or sigterm or ...
    exit(0);
}
The problem is that if I do anything that spawns child processes using int system(const char *command); and at some point someone uses kill to end this program, the child processes will stay alive and the lock will not be released, preventing my app from running again.
Is there a solution to this shortcoming?
You need O_CLOEXEC as a flag to open(). Then the handle won't be open in the child process.
system() eventually calls exec(), which is how new process binaries are loaded.
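Concretely, the fix is a single extra flag in the original open() call:

// With O_CLOEXEC, the descriptor (and with it the flock) is closed
// automatically in any process created via exec, e.g. by system().
auto pidFile = open("/var/run/my-app.lock", O_CREAT | O_RDWR | O_CLOEXEC, 0666);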

Is it possible to redirect child process's stdout to another file in parent process?

A child process runs a bin file, which is provided by Qualcomm.
The child process is invoked by my parent process, which is developed by me.
When the child process is running, it always prints lots of log messages to the shell.
So, am I able to redirect Qualcomm's output stream from stdout to another file in the parent process?
As you know, it's nearly impossible to push Qualcomm to update this bin file.
The key piece here is the POSIX function dup2, which lets you essentially replace one file descriptor with another. And if you use fork (not system), you actually have control of what happens in the child process between the fork and the exec* that loads the other executable.
#include <cstdio>
#include <cstdlib>
extern "C" {
#include <fcntl.h>
#include <unistd.h>
}
#include <stdexcept>
#include <iostream>

pid_t start_child(const char* program, const char* output_filename)
{
    pid_t pid = fork();
    if (pid < 0) {
        // fork failed!
        std::perror("fork");
        throw std::runtime_error("fork failed");
    } else if (pid == 0) {
        // This code runs in the child process.
        int output_fd = open(output_filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (output_fd < 0) {
            std::cerr << "Failed to open log file " << output_filename << ":"
                      << std::endl;
            std::perror("open");
            std::exit(1);
        }
        // Replace the child's stdout and stderr handles with the log file handle:
        if (dup2(output_fd, STDOUT_FILENO) < 0) {
            std::perror("dup2 (stdout)");
            std::exit(1);
        }
        if (dup2(output_fd, STDERR_FILENO) < 0) {
            std::perror("dup2 (stderr)");
            std::exit(1);
        }
        if (execl(program, program, (char*)nullptr) < 0) {
            // These messages will actually go into the file.
            std::cerr << "Failed to exec program " << program << ":"
                      << std::endl;
            std::perror("execl");
            std::exit(1);
        }
    }
    return pid;
}
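A possible usage sketch (both paths are hypothetical placeholders):

#include <sys/wait.h>

int main()
{
    // Hypothetical paths, for illustration only.
    pid_t pid = start_child("/vendor/bin/qualcomm_tool", "/tmp/qualcomm.log");
    int status = 0;
    waitpid(pid, &status, 0); // reap the child once it exits
    return 0;
}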
It is possible on POSIX, because the POSIX shells do this. Executing a program takes two steps on POSIX. First, use fork to clone the parent process, creating the child process. Then the child process uses one of the exec family of system calls to execute the chosen program in place of the parent's program. Between those two steps, the code executing in the child process can perform additional operations that affect the environment of the program about to be executed. In particular, it can open a file descriptor to the file to be redirected to, close the stdout file descriptor, then duplicate the file's descriptor onto the value (1) used for stdout.
You could create your own pipes and attach them to the child process:
Create 3 pipes; they are going to replace stdin, stdout and stderr of the child.
fork().
In the child process, close() the parent ends of the pipes, and close stdin, stdout and stderr.
In the parent process, close() the child ends of the pipes.
dup2() the pipe ends in the child process that are intended to work as the new stdin, stdout and stderr.
exec() the child.
Now you get all output from the child through the pipes in the parent. Of course, you need to read from the pipes coming from the child, or it will block on any write to stdout/stderr. For this you can use a select(), poll() or epoll() multiplexing scheme. A sketch of the setup follows the links below.
See
https://linux.die.net/man/2/pipe
https://linux.die.net/man/2/dup2
https://linux.die.net/man/2/execve
https://linux.die.net/man/2/fork
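Putting those steps together, a sketch of the three-pipe setup could look like this; spawn_piped is an invented helper name, not a library function, and the error handling is deliberately minimal:

#include <unistd.h>
#include <cstdio>

// Spawn `program` with all three standard streams piped. On return,
// in_fd writes to the child's stdin; out_fd and err_fd read its
// stdout/stderr.
pid_t spawn_piped(const char* program, int& in_fd, int& out_fd, int& err_fd)
{
    int in[2], out[2], err[2];
    if (pipe(in) || pipe(out) || pipe(err)) { perror("pipe"); return -1; }
    pid_t pid = fork();
    if (pid == 0) {
        // Child: the parent's ends are not needed here.
        close(in[1]); close(out[0]); close(err[0]);
        dup2(in[0], STDIN_FILENO);
        dup2(out[1], STDOUT_FILENO);
        dup2(err[1], STDERR_FILENO);
        close(in[0]); close(out[1]); close(err[1]);
        execlp(program, program, (char*)NULL);
        _exit(127); // exec failed
    }
    // Parent: the child's ends are not needed here.
    close(in[0]); close(out[1]); close(err[1]);
    in_fd = in[1]; out_fd = out[0]; err_fd = err[0];
    return pid;
}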

C++ both input and output pipe to the external program

I am trying to invoke an external program, feed it some input, and retrieve its output, all from within my program.
It will look like:
(some input) | (external program) | (retrieve output)
I first thought about using popen(), but it seems that is not possible, because the pipe is not bidirectional.
Is there an easy way to handle this kind of thing in Linux?
I could try making a temp file, but it would be great if this could be handled cleanly without touching the disk.
Any solution? Thanks.
On Linux you can use the pipe function: open two new pipes, one for each direction, then create a child process using fork. Afterwards, you typically close the file descriptors not in use (the read end on the parent, the write end on the child for the pipe the parent uses to send to the child, and vice versa for the other pipe) and then start your application using execve or one of its front ends.
If you dup2 the pipes' file descriptors onto the standard console file handles (STDIN_FILENO/STDOUT_FILENO; each process separately), you should even be able to use std::cin/std::cout for communicating with the other process (you might want to do so only in the child, as you probably want to keep your console in the parent). I have not tested this, though, so that's left to you.
When done, you'd still wait or waitpid for your child process to terminate. It might look similar to the following piece of code:
int pipeP2C[2], pipeC2P[2];
// (names: short for pipe for X (writing) to Y, with P == parent, C == child)
if (pipe(pipeP2C) != 0 || pipe(pipeC2P) != 0)
{
    // error
    // TODO: appropriate handling
}
else
{
    int pid = fork();
    if (pid < 0)
    {
        // error
        // TODO: appropriate handling
    }
    else if (pid > 0)
    {
        // parent
        // close unused ends:
        close(pipeP2C[0]); // read end
        close(pipeC2P[1]); // write end

        // use pipes to communicate with child...

        int status;
        waitpid(pid, &status, 0);
        // cleanup or do whatever you want to do afterwards...
    }
    else
    {
        // child
        close(pipeP2C[1]); // write end
        close(pipeC2P[0]); // read end
        dup2(pipeP2C[0], STDIN_FILENO);
        dup2(pipeC2P[1], STDOUT_FILENO);

        // you should be able to close the two remaining pipe file
        // descriptors as well, as you dup'ed them already
        // (confirmed that it is working)
        close(pipeP2C[0]);
        close(pipeC2P[1]);

        execve(/*...*/); // won't return - but you should now be able to
                         // use stdin/stdout to communicate with the parent
    }
}
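As a sketch of the "use pipes to communicate with child" part: the parent can send its input, close the write end so the child sees EOF, then collect the output. Note that writing everything before reading can deadlock if the input exceeds the pipe buffer (the very problem discussed in the last question below):

#include <unistd.h>
#include <string>

// Continuing the parent branch above: send `input` to the child's stdin,
// then collect everything the child writes to its stdout.
std::string talk_to_child(int writeFd, int readFd, const std::string& input)
{
    write(writeFd, input.data(), input.size());
    close(writeFd); // EOF, so the child knows the input is complete
    std::string output;
    char buf[4096];
    ssize_t n;
    while ((n = read(readFd, buf, sizeof buf)) > 0)
        output.append(buf, n);
    close(readFd);
    return output;
}
// e.g.: talk_to_child(pipeP2C[1], pipeC2P[0], "some input\n");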

Linux - child reading from pipe receives debug messages sent to standard output

I'm trying to create parent and child processes that communicate through a pipe.
I've set up the child to listen to its parent through a pipe, with a read call running in a while loop.
In order to debug my program I print debug messages to standard output (note that my read call uses the pipe's file descriptor, which is different from 0 or 1).
For some reason, these debug messages are being received by the read call in my child process. I can't understand why this is happening. What could be causing it? What elegant solution do I have, apart from writing to standard error instead of standard output?
This code causes an endless loop because the cout message just triggers another read. Why? Notice that the child process exits upon receiving a CHILD_EXIT_CODE message from its parent.
int myPipe[2];
pipe(myPipe);
if (fork() == 0)
{
    int readPipe = myPipe[0];
    while (true)
    {
        size_t nBytes = read(readPipe, readBuffer, sizeof(readBuffer));
        std::cout << readBuffer << "\n";
        int newPosition = atoi(readBuffer);
        if (newPosition == CHILD_EXIT_CODE)
        {
            exit(0);
        }
    }
}
Edit: Code creating the pipe and fork
I do not know what your parent process is doing (you did not post its code), but from your description it seems that your parent and child processes share the same stdout stream (the child inherits copies of the parent's set of open file descriptors; see man fork).
I guess what you should do is attach the stdout and stderr streams in your parent process to the write sides of your pipes (you need one more pipe, for the stderr stream).
This is what I would try if I were in your situation (in my opinion you are missing dup2):
pid_t pid;          /* Child or parent PID. */
int out[2], err[2]; /* Store pipe file descriptors. Write ends are attached
                       to the stdout and stderr streams. */

// Init values as error.
out[0] = out[1] = err[0] = err[1] = -1;

/* Create the pipes; they will be attached to the stderr and stdout streams. */
if (pipe(out) < 0 || pipe(err) < 0) {
    /* Error: you should log it */
    exit(EXIT_FAILURE);
}

if ((pid = fork()) == -1) {
    /* Error: you should log it */
    exit(EXIT_FAILURE);
}

if (pid != 0) {
    /* Parent process */
    /* Attach the stderr and stdout streams to your pipes (their write ends) */
    if ((dup2(out[1], 1) < 0) || (dup2(err[1], 2) < 0)) {
        /* Error: you should log it */
        /* The child is going to be an orphan process; you should kill it
           before calling exit. */
        exit(EXIT_FAILURE);
    }
    /* WHATEVER YOU DO WITH YOUR PARENT PROCESS */

    /* The child is going to be an orphan process; you should kill it
       before calling exit. */
    exit(EXIT_SUCCESS);
}
else {
    /* Child process */
}
You should not forget a couple of things:
Call wait or waitpid to release the resources associated with the child process when it dies; wait or waitpid must be called from the parent process.
If you use wait or waitpid you might have to think about blocking SIGCHLD before calling fork, and in that case you should unblock SIGCHLD in your child process right after fork, at the beginning of your child process code (a child created via fork(2) inherits a copy of its parent's signal mask; see sigprocmask).
Something that is forgotten many times: be aware of the EINTR error. dup2, waitpid/wait, read and many others are affected by it (a retry wrapper is sketched below).
If your parent process dies before your child process, you should try to kill the child process if you do not want it to become an orphan.
Take a look at _exit. Perhaps you should use it in your child process instead of exit.
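As an example of coping with EINTR, a retry wrapper around read might look like this:

#include <cerrno>
#include <unistd.h>

// read() that retries when interrupted by a signal (EINTR),
// e.g. by the SIGCHLD delivered when a child dies.
ssize_t read_retry(int fd, void* buf, size_t count)
{
    for (;;) {
        ssize_t n = read(fd, buf, count);
        if (n >= 0 || errno != EINTR)
            return n;
        // interrupted before any data arrived: just try again
    }
}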

Child process is blocked by full pipe, cannot read in parent process

I have roughly created the following code to call a child process:
// pipe meanings
const int READ = 0;
const int WRITE = 1;

int fd[2];
// Create pipe
if (pipe(fd))
{
    throw ...
}

p_pid = fork();
if (p_pid == 0) // in the child
{
    close(fd[READ]);
    if (dup2(fd[WRITE], fileno(stdout)) == -1)
    {
        throw ...
    }
    close(fd[WRITE]);

    // Call exec
    execv(argv[0], const_cast<char* const*>(&argv[0]));
    _exit(-1);
}
else if (p_pid < 0) // fork has failed
{
    throw ...
}
else // in the parent
{
    close(fd[WRITE]);
    p_stdout = new std::ifstream(fd[READ]);
}
Now, if the subprocess does not write too much to stdout, I can wait for it to finish and then read its output from p_stdout. If it writes too much, the child's write blocks and the parent waits for it forever.
To fix this, I tried to wait with WNOHANG in the parent; if the child is not finished, I read all available output from p_stdout using readsome, sleep a bit, and try again. Unfortunately, readsome never reads anything:
while (true)
{
    if (waitid(P_PID, p_pid, &info, WEXITED | WNOHANG) != 0)
        throw ...;
    else if (info.si_pid != 0) // waiting has succeeded
        break;

    char tmp[1024];
    size_t sizeRead;
    sizeRead = p_stdout->readsome(tmp, 1024);
    if (sizeRead > 0)
        s_stdout.write(tmp, sizeRead);
    sleep(1);
}
The question is: Why does this not work and how can I fix it?
Edit: If there were only one child, simply using read instead of readsome would probably work, but the process has multiple children and needs to react as soon as one of them terminates.
As sarnold suggested, you need to change the order of your calls: read first, wait last. Even if your method worked, you might miss the last read, i.e. you would exit the loop before reading the last set of bytes that was written.
The problem might be that the ifstream is non-blocking. I've never liked iostreams; even in my C++ projects, I always liked the simplicity of C's stdio functions (i.e. FILE*, fprintf, etc.). One way to get around this is to check whether the descriptor is readable before reading. You can use select to determine whether there is data waiting on that pipe. You're going to need select if you are going to read from multiple children anyway, so you might as well learn it now.
As for a quick isreadable function, try something like this (please note I haven't tried compiling this):
bool isreadable(int fd, int timeoutSecs)
{
    struct timeval tv = { timeoutSecs, 0 };
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(fd, &readSet);
    return select(fd + 1, &readSet, NULL, NULL, &tv) == 1;
}
Then in your parent code, do something like:
char tmp[1024];
while (true) {
    if (isreadable(fd[READ], 1)) {
        ssize_t bytes = read(fd[READ], tmp, sizeof tmp);
        if (bytes <= 0)
            break;
        // process tmp[0..bytes) here
    }
}
int status;
waitpid(p_pid, &status, 0);
I'd suggest re-writing the code so that it doesn't call waitpid(2) until after read(2) calls on the pipe return 0 to signify end-of-file. Once you get the end-of-file return from your read calls, you know the child is dead, and you can finally waitpid(2) for it.
Another option is to de-couple the reading from the reaping even further and perform the wait calls in a SIGCHLD signal handler asynchronously to the reading operations.
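A sketch of that decoupled approach, reaping children from a SIGCHLD handler so the reading code never has to poll waitpid:

#include <cerrno>
#include <signal.h>
#include <sys/wait.h>

// Reap every dead child from the signal handler, so reading and reaping
// are fully decoupled. Only async-signal-safe calls are used here.
extern "C" void on_sigchld(int)
{
    int saved_errno = errno;
    while (waitpid(-1, nullptr, WNOHANG) > 0)
        ; // collect all children that have exited so far
    errno = saved_errno;
}

int main()
{
    struct sigaction sa = {};
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART; // restart interrupted reads where possible
    sigaction(SIGCHLD, &sa, nullptr);
    // ... fork children and read from their pipes as before ...
    return 0;
}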