I need to make a program that runs a process (another program of mine) and can communicate with it (sending to its stdin and receiving its stdout).
I have read about functions like popen() and CreateProcess(), but I don't really understand how to work with them.
It would be great if you could show me some sample code (how to start the process, send stdin, receive stdout).
C++ functions would be preferred (if there are any).
Thank you in advance.
The interface for POSIX functions is C only, but you can use them from C++.
Basically:
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    // Open two pipes for communication.
    // The descriptors will be available to both
    // parent and child.
    int in_fd[2];
    int out_fd[2];
    pipe(in_fd);  // For child's stdin
    pipe(out_fd); // For child's stdout

    // Fork
    pid_t pid = fork();
    if (pid == 0)
    {
        // We're in the child
        close(out_fd[0]);
        dup2(out_fd[1], STDOUT_FILENO);
        close(out_fd[1]);
        close(in_fd[1]);
        dup2(in_fd[0], STDIN_FILENO);
        close(in_fd[0]);
        // Now, launch your child whichever way you want;
        // see e.g. man 2 exec for this.
        _exit(0); // If you must exit manually, use _exit, not exit.
                  // On a successful exec you never return here at all.
    }
    else if (pid == -1)
        ; // Handle the error from fork
    else
    {
        // You're in the parent
        close(out_fd[1]);
        close(in_fd[0]);
        // Now you can read the child's stdout from out_fd[0]
        // and write to its stdin via in_fd[1].
        // See man 2 read and man 2 write.
        // ...
        // Wait for the child to terminate (or it becomes a zombie).
        int status;
        waitpid(pid, &status, 0);
        // See man waitpid for what to do with status.
    }
}
Don't forget to check error codes (which I did not), and refer to the man pages for details. But you see the point: when you open file descriptors (e.g. via pipe()), they are available to both parent and child. The parent closes one end of each pipe; the child closes the other end (after first redirecting it with dup2).
Be smart and don't be afraid of Google and the man pages.
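To make the skeleton concrete, here is a minimal end-to-end sketch (error checking still omitted, as above) that runs /bin/cat as the child and pushes one line through it; cat and the message are only placeholders for whatever program and data you actually use:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int in_fd[2], out_fd[2];
    pipe(in_fd);  // parent writes in_fd[1] -> child reads stdin
    pipe(out_fd); // child writes stdout    -> parent reads out_fd[0]

    pid_t pid = fork();
    if (pid == 0)
    {
        dup2(in_fd[0], STDIN_FILENO);
        dup2(out_fd[1], STDOUT_FILENO);
        close(in_fd[0]);  close(in_fd[1]);
        close(out_fd[0]); close(out_fd[1]);
        execlp("cat", "cat", (char *) 0); // placeholder child program
        _exit(127);                       // only reached if exec fails
    }

    close(in_fd[0]);
    close(out_fd[1]);

    const char msg[] = "hello child\n";
    write(in_fd[1], msg, sizeof msg - 1);
    close(in_fd[1]); // EOF on the child's stdin, so cat can finish

    char buf[256];
    ssize_t n;
    while ((n = read(out_fd[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);
    close(out_fd[0]);

    int status;
    waitpid(pid, &status, 0);
}

Closing in_fd[1] in the parent is the important step: it is what delivers end-of-file to the child's stdin and lets the child terminate.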
I'm trying to develop a simple Unix shell simulation. The part I'm currently stuck on is understanding how execvp works. I've tried to do some research into it, but I'm not having much luck.
Basically, I'm making a child process with fork(), and a pipe. I'm redirecting STDOUT to my pipe's write fd. Lastly, I'm calling execvp from the child process. The execvp() call runs a program that just outputs a simple string.
I am then trying to read this string from the parent and use it as input for my second execvp() program, which simply outputs whatever it reads in.
The issue I'm having is that my first program is being called from execvp(), and then it just outputs the string straight to the terminal.
I guess my question is: does a program executed with execvp() not inherit the STDOUT (my pipe's write end) of the process it was created in? And if not, how would I go about forcing it to write into my pipe instead of the terminal? Can I put my execvp inside a write call?
else if (commandLine[i] == "|") {
    pid_t pid2;
    pid2 = fork();
    t = i;
    if (pid2 == 0) { // in child of child
        close(pipe1fd[0]);
        dup2(pipe1fd[1], STDOUT_FILENO);
        execvp(commandLine[i-1], commandLine);
    }
    else {
        close(pipe1fd[1]);
        dup2(pipe1fd[0], STDIN_FILENO);
        wait(0);
        execvp(commandLine[i+1], commandLine);
        goto BACK;
    }
}
I am writing a baby program for practice. What I am trying to accomplish is basically a simple little GUI which displays services (for Linux); with buttons to start, stop, enable, and disable services (Much like the msconfig application "Services" tab in Windows). I am using C++ with Qt Creator on Fedora 21.
I want to create the GUI with C++, populate it with the list of services by calling bash scripts, and call bash scripts on button clicks to perform the appropriate action (enable, disable, etc.)
But when the C++ GUI calls a bash script (using system("path/to/script.sh")), the return value only reflects the exit status. How do I receive the output of the script itself, so that I can in turn use it to display on the GUI?
For a conceptual example: if I were trying to display the output of (systemctl --type service | cut -d " " -f 1) in a GUI I have created in C++, how would I go about doing that? Is this even the correct way to do what I am trying to accomplish? If not:
1. What is the right way? and
2. Is there still a way to do it using my current method?
I have looked for a solution to this problem but I can't find information on how to return values from Bash to C++, only how to call Bash scripts from C++.
We're going to take advantage of the popen function here.
#include <cstdio>
#include <string>

std::string exec(const char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";
    char buffer[128];
    std::string result;
    // fgets returns NULL at EOF, so the loop stops cleanly.
    while (fgets(buffer, sizeof(buffer), pipe) != NULL)
        result += buffer;
    pclose(pipe);
    return result;
}
This function takes a command as an argument, and returns the output as a string.
NOTE: this will not capture stderr! A quick and easy workaround is to redirect stderr to stdout, with 2>&1 at the end of your command.
Here is documentation on popen. Happy coding :)
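A quick usage sketch, assuming the exec() function above is in scope (the ls -l command is only an illustration):

#include <iostream>
#include <string>

int main() {
    // 2>&1 folds stderr into the captured output, per the note above.
    std::string listing = exec("ls -l 2>&1");
    std::cout << listing;
}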
You have to run the command using popen instead of system and then loop over the output through the returned file pointer.
Here is a simple example for the command ls -l
#include <stdio.h>
#include <stdlib.h>

int main() {
    FILE *process;
    char buff[1024];

    process = popen("ls -l", "r");
    if (process != NULL) {
        /* fgets returns NULL at EOF, so the last line is not printed twice. */
        while (fgets(buff, sizeof(buff), process) != NULL) {
            printf("%s", buff);
        }
        pclose(process);
    }
    return 0;
}
The long approach - which gives you complete control of stdin, stdout, and stderr of the child process, at the cost of fairly significant complexity - involves using fork and execve directly.
Before forking, set up your endpoints for communication - pipe works well, or socketpair. I'll assume you've invoked something like below:
int childStdin[2], childStdout[2], childStderr[2];
pipe(childStdin);
pipe(childStdout);
pipe(childStderr);
After fork, in child process before execve:
dup2(childStdin[0], 0); // childStdin read end to fd 0 (stdin)
dup2(childStdout[1], 1); // childStdout write end to fd 1 (stdout)
dup2(childStderr[1], 2); // childStderr write end to fd 2 (stderr)
.. then close all of childStdin, childStdout, and childStderr.
After fork, in parent process:
close(childStdin[0]);  // parent keeps the write end of the child's stdin pipe
close(childStdout[1]); // parent keeps the read ends of the child's
close(childStderr[1]); // stdout and stderr pipes
Now, your parent process has complete control of the std i/o of the child process - and must safely multiplex childStdin[1], childStdout[0], and childStderr[0], while also monitoring for SIGCHLD and eventually using one of the wait family of calls to check the process termination code. pselect is particularly good for dealing with SIGCHLD while handling std i/o asynchronously. See also select or poll, of course.
If you want to merge the child's stdout and stderr, just dup2(childStdout[1], 2) and get rid of childStderr entirely.
The man pages should fill in the blanks from here. So that's the hard way, should you need it.
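For the multiplexing part, here is a rough sketch (one way among several) that uses poll() to drain the child's stdout and stderr pipes until both reach EOF; it assumes the childStdout/childStderr arrays set up above and omits error handling:

#include <poll.h>
#include <unistd.h>
#include <cstdio>

// Drain childStdout[0] and childStderr[0] until both report EOF.
void drain_child(int outFd, int errFd)
{
    struct pollfd fds[2] = { { outFd, POLLIN, 0 }, { errFd, POLLIN, 0 } };
    int open_fds = 2;
    char buf[4096];

    while (open_fds > 0 && poll(fds, 2, -1) > 0)
    {
        for (int i = 0; i < 2; ++i)
        {
            if (fds[i].revents & (POLLIN | POLLHUP))
            {
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n > 0) {
                    fwrite(buf, 1, n, i == 0 ? stdout : stderr);
                } else {            // EOF (or error): stop watching this fd
                    fds[i].fd = -1; // poll() ignores negative fds
                    --open_fds;
                }
            }
        }
    }
}

Writes to childStdin[1] can be folded into the same loop by also polling that fd for POLLOUT.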
Okay, as part of my lib I need a 'Worker' application to run an external program.
Normally I would do it with a call to:
system("");
But this time what is needed is:
- the return code of that program
- the application to keep working while the executed program is running
So pseudocode for the perfect implementation would look like this:
CTask::Run()
{
    m_iReturnCode = -1;
    ExecuteTask(m_strBinaryName);
    while (Task_Executing)
    {
        HeartBeat();
    }
    return m_iReturnCode;
}
Just to clarify, I am running this on Unix platforms.
What are my options here: popen / fork?
Does anyone have a good solution already running who can shed a bit of light on this?
Thanks for any input on this.
I am using a Linux system, boost for threading, and a pipe to execute the command and get its result (if you do not know boost, you should certainly have a look at it).
I found the hint to use a pipe here on Stack Overflow, but I am sorry I do not know the exact question any more.
I do not add the outer thread code here; just start the method execute within its own thread (a sketch of that follows after the code).
std::string execute()
{
    std::string result;

    // DO NOT INTERRUPT THE THREAD WHILE READING FROM THE PIPE
    boost::this_thread::disable_interruption di;

    // Append an echo of the exit code to the command so we can read it too.
    std::string command = mp_command + "; echo $?";

    // Open the pipe, execute the command and read its output.
    FILE* pipe = popen(command.c_str(), "r");
    if (pipe)
    {
        char buffer[128];
        while (fgets(buffer, sizeof(buffer), pipe) != NULL)
        {
            result += buffer;
        }

        // Sleeping busy-wait for the pipe to close.
        while (pclose(pipe) == -1)
        {
            boost::this_thread::sleep(boost::posix_time::milliseconds(100));
        }
    }
    else
    {
        mp_isValid = false;
    }

    return result;
}
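For the thread code left out above, something along these lines should work; Task, mp_command, and mp_isValid are hypothetical stand-ins for whatever the surrounding class actually looks like:

#include <boost/thread.hpp>
#include <string>

struct Task {
    std::string mp_command;
    bool mp_isValid = true;
    std::string m_result;

    std::string execute(); // the method shown above, defined elsewhere

    void run() { m_result = execute(); }
};

int main() {
    Task task;
    task.mp_command = "ls -l";
    boost::thread worker(&Task::run, &task); // execute() runs in its own thread
    // ... the main thread is free to do other work here ...
    worker.join();                           // wait for the command to finish
}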
You can create a fork with fork() (or clone() if you want threads), and then run the program using execve() or system() in one process, and continue running the original program in the other.
For the return code, you can get it even from a system() call, like this:
ret = system("<your_command>");
printf("%d\n", WEXITSTATUS(ret));
There must be some sort of interprocess or interthread communication. In case you don't want to fork or use threads, you can try using a shared file: open the file for writing in the child task (called via system()) and, when you are done, write some value (e.g. "finished") and close the file. In the parent task, heartbeat until you read "finished" from the shared file.
You can also do this by writing to a global variable instead of a shared file.
However, I would fork or thread it; using a shared file or global variable is error-prone, and I am not entirely sure it would work that way.
I am trying to get the output of the Tcl interpreter as described in the answer to this question: Tcl C API: redirect stdout of embedded Tcl interp to a file without affecting the whole program. Instead of writing the data to a file, I need to get it through a pipe. I changed Tcl_OpenFileChannel to Tcl_MakeFileChannel and passed the write end of a pipe to it. Then I called Tcl_Eval with some puts. No data arrived at the read end of the pipe.
#include <sys/wait.h>
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <tcl.h>
#include <iostream>
int main() {
    int pfd[2];
    if (pipe(pfd) == -1) { perror("pipe"); exit(EXIT_FAILURE); }

    /*
    int saved_flags = fcntl(pfd[0], F_GETFL);
    fcntl(pfd[0], F_SETFL, saved_flags | O_NONBLOCK);
    */

    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_Channel chan;
    int rc;
    int fd;

    /* Get the channel bound to stdout.
     * Initialize the standard channels as a byproduct
     * if this wasn't already done. */
    chan = Tcl_GetChannel(interp, "stdout", NULL);
    if (chan == NULL) {
        return TCL_ERROR;
    }

    /* Duplicate the descriptor used for stdout. */
    fd = dup(1);
    if (fd == -1) {
        perror("Failed to duplicate stdout");
        return TCL_ERROR;
    }

    /* Close the stdout channel.
     * As a byproduct, this closes FD 1, which we've just cloned. */
    rc = Tcl_UnregisterChannel(interp, chan);
    if (rc != TCL_OK)
        return rc;

    /* Duplicate our saved stdout descriptor back.
     * dup() semantics are such that if it doesn't fail,
     * we get FD 1 back. */
    rc = dup(fd);
    if (rc == -1) {
        perror("Failed to reopen stdout");
        return TCL_ERROR;
    }

    /* Get rid of the cloned FD. */
    rc = close(fd);
    if (rc == -1) {
        perror("Failed to close the cloned FD");
        return TCL_ERROR;
    }

    chan = Tcl_MakeFileChannel((void*)pfd[1], TCL_WRITABLE | TCL_READABLE);
    if (chan == NULL)
        return TCL_ERROR;

    /* Since the stdout channel does not exist in the interp,
     * this call will make our file channel the new stdout. */
    Tcl_RegisterChannel(interp, chan);

    rc = Tcl_Eval(interp, "puts test");
    if (rc != TCL_OK) {
        fputs("Failed to eval", stderr);
        return 2;
    }

    char buf;
    while (read(pfd[0], &buf, 1) > 0) {
        std::cout << buf;
    }
}
I've no time at the moment to tinker with the code (might do that later), but I think this approach is flawed, as I see two problems with it:
If stdout is connected to something which is not an interactive console (the runtime usually checks for this with a call to isatty(2)), full buffering could be (and I think will be) engaged. So unless your call to puts in the embedded interpreter outputs enough bytes to fill up or overflow Tcl's channel buffer (8 KiB, ISTR) and then the downstream system buffer (see the next point), which I think won't be less than 4 KiB (the size of a single memory page on a typical HW platform), nothing will come up at the read side.
You could test this by changing your Tcl script to flush stdout, like this:
puts one
flush stdout
puts two
You should then be able to read the four bytes output by the first puts from the pipe's read end.
A pipe is two FDs connected via a buffer (of a defined but system-dependent size). As soon as the write side (your Tcl interp) fills up that buffer, the write call which hits the "buffer full" condition will block the writing process unless something reads from the read end to free up space in the buffer. Since the reader here is the same process, this condition has a perfect chance to deadlock: as soon as the Tcl interp is stuck trying to write to stdout, the whole process is stuck.
Now the question is: could this be made working?
The first problem might be partially fixed by turning off buffering for that channel on the Tcl side. This (supposedly) won't affect buffering provided for the pipe by the system.
The second problem is harder, and I can only think of two possibilities to fix it:
Create a pipe then fork(2) a child process ensuring its standard output stream is connected to the pipe's write end. Then embed the Tcl interpreter in that process and do nothing to the stdout stream in it as it will be implicitly connected to the child process standard output stream attached, in turn, to the pipe. You then read in your parent process from the pipe until the write side is closed.
This approach is more robust than using threads (see the next point) but it has one potential downside: if you need to somehow affect the embedded Tcl interpreter in some ways which are not known up front before the program is run (say, in response to the user's actions), you will have to set up some sort of IPC between the parent and the child processes.
Use threading and embed the Tcl interp into a separate thread: then ensure that reads from the pipe happen in another (let's call it "controlling") thread.
This approach might superficially look simpler than forking a process but then you get all the hassles related to proper synchronization common for threading. For instance, a Tcl interpreter must not be accessed directly from threads other than the one in which the interp was created. This implies not only concurrent access (which is kind of obvious by itself) but any access at all, including synchronized, because of possible TLS issues. (I'm not exactly sure this holds true, but I have a feeling this is a big can of worms.)
So, having said all that, I wonder why you seem to systematically reject suggestions to implement a custom "channel driver" for your interp and just use it to provide the implementation for the stdout channel in your interp? This would create a super-simple single-thread fully-synchronized implementation. What's wrong with this approach, really?
Also observe that if you decided to use a pipe in the hope it would serve as a sort of "anonymous file", then this is wrong: a pipe assumes both sides work in parallel, whereas in your code you first make the Tcl interp write everything it has to write and only then try to read it. This is asking for trouble, as I've described; but if the pipe was chosen just to avoid messing with a real file, then you're doing it wrong, and on a POSIX system the course of action could be:
1. Use mkstemp() to create and open a temporary file; it rewrites the template you pass it into the generated name.
2. Immediately delete the file using that generated name.
3. Since the file still has an open FD (returned by mkstemp()), it disappears from the file system, but its storage is not freed until the descriptor is closed, so it can still be written to and read from.
4. Make this FD the interp's stdout. Let the interp write everything it has to.
5. After the interp is finished, lseek() the FD back to the beginning of the file and read from it.
6. Close the FD when done; the space it occupied on the underlying filesystem will be reclaimed.
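A minimal sketch of that recipe, with the Tcl-specific parts left out so it only shows the FD lifecycle:

#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    char tmpl[] = "/tmp/interp-out-XXXXXX";
    int fd = mkstemp(tmpl); // creates and opens the file, rewriting tmpl
    if (fd == -1) { perror("mkstemp"); return 1; }
    unlink(tmpl);           // drop the name; the open FD keeps the file alive

    // Here you would hand fd to the interp as its stdout
    // (e.g. via Tcl_MakeFileChannel) and let it write everything.
    const char msg[] = "output from the interp\n";
    write(fd, msg, sizeof msg - 1);

    lseek(fd, 0, SEEK_SET); // rewind and read everything back
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);

    close(fd);              // the storage is reclaimed here
}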
I am developing on the Linux platform.
I want to create a new process from my library without replacing the current executing image.
Because I am developing a library, I don't have a main function.
And I want the new process to keep running after the invoking application closes (just like the CreateProcess Windows API).
Is this possible in Linux or not?
something like this function:
void Linux_CreateProcess(const char* app_name)
{
    // Execute app_name.
    // ???????? what is the code ??????
    // app_name keeps running and never closes, even if the current application closes.
    return;
}
Note:
system() blocks the current process; that is no good, I want the current process to continue.
The exec() family replaces the current executing image; that is no good either.
popen() closes the new process when the current process closes.
The fork/exec combination was already mentioned, but there is also the posix_spawn family of functions that can be used as a replacement for fork + exec and is a more direct equivalent to CreateProcess. Here is an example for both possibilities:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <spawn.h>
#include <sys/wait.h>
extern char **environ;
void test_fork_exec(void);
void test_posix_spawn(void);
int main(void) {
    test_fork_exec();
    test_posix_spawn();
    return EXIT_SUCCESS;
}

void test_fork_exec(void) {
    pid_t pid;
    int status;

    puts("Testing fork/exec");
    fflush(NULL);
    pid = fork();
    switch (pid) {
    case -1:
        perror("fork");
        break;
    case 0:
        execl("/bin/ls", "ls", (char *) 0);
        perror("exec");
        _exit(127); /* exec failed; don't fall back into the parent's code */
    default:
        printf("Child id: %i\n", pid);
        fflush(NULL);
        if (waitpid(pid, &status, 0) != -1) {
            printf("Child exited with status %i\n", status);
        } else {
            perror("waitpid");
        }
        break;
    }
}

void test_posix_spawn(void) {
    pid_t pid;
    char *argv[] = {"ls", (char *) 0};
    int status;

    puts("Testing posix_spawn");
    fflush(NULL);
    status = posix_spawn(&pid, "/bin/ls", NULL, NULL, argv, environ);
    if (status == 0) {
        printf("Child id: %i\n", pid);
        fflush(NULL);
        if (waitpid(pid, &status, 0) != -1) {
            printf("Child exited with status %i\n", status);
        } else {
            perror("waitpid");
        }
    } else {
        printf("posix_spawn: %s\n", strerror(status));
    }
}
posix_spawn is probably the preferred solution these days.
Before that fork() and then execXX() was the way to do this (where execXX is one of the exec family of functions, including execl, execlp, execle, execv, execvp, and execvpe). In the GNU C library currently, at least for Linux, posix_spawn is implemented via fork/exec anyway; Linux doesn't have a posix_spawn system call.
You would use fork() (or vfork()) to launch a separate process, which will be a clone of the parent. In both the child and parent process, execution continues, but fork returns a different value in either case allowing you to differentiate. You can then use one of the execXX() functions from within the child process.
Note, however, this problem - text borrowed from one of my blog posts (http://davmac.wordpress.com/2008/11/25/forkexec-is-forked-up/):
There doesn’t seem to be any simple standards-conformant way (or even a generally portable way) to execute another process in parallel and be certain that the exec() call was successful. The problem is, once you’ve fork()d and then successfully exec()d you can’t communicate with the parent process to inform that the exec() was successful. If the exec() fails then you can communicate with the parent (via a signal for instance) but you can’t inform of success – the only way the parent can be sure of exec() success is to wait() for the child process to finish (and check that there is no failure indication) and that of course is not a parallel execution.
i.e. if execXX() succeeds, you no longer have control so can't signal success to the original (parent) process.
A potential solution to this problem, in case it is an issue in your case:
[...] use pipe() to create a pipe, set the output end to be close-on-exec, then fork() (or vfork()), exec(), and write something (perhaps errno) to the pipe if the exec() fails (before calling _exit()). The parent process can read from the pipe and will get an immediate end-of-input if the exec() succeeds, or some data if the exec() failed.
(Note that this solution, though, is prone to causing priority inversion if the child process runs at a lower priority than the parent and the parent waits for output from it.)
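A simplified sketch of that pipe trick (real code should also handle EINTR, partial reads, and check every call):

#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
#include <cerrno>

// Returns 0 if the exec succeeded, the child's errno if the exec failed,
// or the parent's errno if pipe/fork themselves failed.
int spawn_checked(const char *path)
{
    int pfd[2];
    if (pipe(pfd) == -1)
        return errno;
    fcntl(pfd[1], F_SETFD, FD_CLOEXEC); // write end vanishes on successful exec

    pid_t pid = fork();
    if (pid == -1)
        return errno;
    if (pid == 0) {
        close(pfd[0]);
        execl(path, path, (char *) 0);
        int err = errno;                 // exec failed: report errno to parent
        write(pfd[1], &err, sizeof err);
        _exit(127);
    }

    close(pfd[1]);
    int err = 0;
    ssize_t n = read(pfd[0], &err, sizeof err);
    close(pfd[0]);
    return n > 0 ? err : 0;              // EOF (n == 0) means the exec succeeded
}

Whatever the result, the caller still needs to reap the child with waitpid() eventually.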
There is also posix_spawn as mentioned above and in other answers, but it doesn't resolve the issue of detecting failure to execute the child executable, since it is often implemented in terms of fork/exec anyway and can return success before the exec() stage fails.
You wrote:
I want to create a new process from my library without replacing the current executing image.
system() blocks the current process; that is no good, I want the current process to continue.
Just add an ampersand after the command call.
Example: system("/bin/my_prog_name &");
Your process will not be blocked!
The classic way to do this is to use fork() to create a child process, and then use one of the exec() functions to replace the executing image of the child, leaving the parent untouched. Both processes will then run in parallel.
I think posix_spawn does what you want. Internally it might do fork/exec, but maybe it also does some funky useful stuff.
You should be using fork() and then execvp().
The fork() function creates a new child process. In the parent process you receive the process ID of the child; in the child process the returned process ID is 0, which tells us that the process is the child.
execvp() replaces the calling process image with a new process image. This has the effect of running a new program with the process ID of the calling process. Note that a new process is not started; the new process image simply overlays the original process image. execvp is most commonly used to overlay a process image that has been created by a call to fork.
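Put together, the pattern looks roughly like this (ls -l is a placeholder for your program and arguments):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    pid_t pid = fork();
    if (pid == 0) {
        // Child: overlay this process image with the new program.
        char *argv[] = { (char *) "ls", (char *) "-l", (char *) 0 };
        execvp(argv[0], argv);
        perror("execvp"); // only reached if execvp fails
        _exit(127);
    }
    // Parent: pid holds the child's process ID; wait for it to finish.
    int status;
    waitpid(pid, &status, 0);
}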
Yes, fork() and exec..() is the correct solution. Look at this code if it can help you :
switch (fork())
{
case -1: // Error
    // Handle the error
    break;
case 0:
    // Call one of the exec family -- personally I prefer execlp
    execlp("path/to/binary", "binary name", arg1, arg2, .., NULL);
    _exit(42); // Only reached if execlp fails
    break;
default:
    // Do what you want
    break;
}
I think fork is what you are looking for.