Boost Process : How do I redirect process output to a file? - boost-process

Launching a process using the Boost::Process library, I have no problem reading output from stdout. However, if I instead wanted to redirect stdout to a file, how would I go about doing that?

Guessing that you use Boost.Process 0.5, you would do it this way:
namespace bp = boost::process;
boost::iostreams::file_descriptor_sink sink("my_file");
bp::child c = bp::execute(bp::set_cmd("ls"), bp::bind_stdout(sink));
If you want to use boost.process 0.6 (which I'd recommend), you just write:
bp::child c("ls", bp::std_out > "my_file");

Related

Simplistic way to send data to another process (win)?

Suppose you are developing two applications for the Windows platform (A and B).
The platform/system is Windows (Windows 10, if that matters).
How can you send some piece of information from A to B if you are only allowed to work at the C++ language level (that is, using only the standard library and STL)? This rules out any third-party libraries.
I'm trying to avoid the system API, as it usually involves a healthy amount of C-like programming (and is therefore not suited to my purpose).
In this particular scenario both processes run continuously, and the sending happens in response to some outside event (if it matters), so some kind of synchronization is probably needed.
Possible solutions under consideration:
Using files via std::ofstream and std::ifstream could be a possible (albeit crude) solution, but how can synchronization be achieved then?
Even redirecting STDOUT to STDIN could be fine, especially if there is some simple way to set it up (e.g. a one-liner on the command line at startup; PowerShell could be a possibility if needed).
A solution involving transfer via a data file (this uses std::filesystem::rename as a way of synchronizing, or rather of avoiding the need to):
a.exe (writer)
#include <filesystem>
#include <fstream>

auto tmpfile = std::filesystem::temp_directory_path() / "some_uuid.txt";
auto datafile = std::filesystem::temp_directory_path() / "data.txt";
std::ofstream(tmpfile) << "hello" << std::endl;
std::filesystem::rename(tmpfile, datafile); // atomic: the reader never sees a partial file
b.exe (reader)
auto datafile = std::filesystem::temp_directory_path() / "data.txt";
while (!std::filesystem::exists(datafile)) {
    ; // we have nothing else to do? consider sleeping between checks
}
std::ifstream input(datafile);
// read input etc.

Returning output from bash script to calling C++ function

I am writing a baby program for practice. What I am trying to accomplish is basically a simple little GUI which displays services (for Linux); with buttons to start, stop, enable, and disable services (Much like the msconfig application "Services" tab in Windows). I am using C++ with Qt Creator on Fedora 21.
I want to create the GUI with C++, populating it with the list of services by calling bash scripts, and calling bash scripts on button clicks to perform the appropriate action (enable, disable, etc.).
But when the C++ GUI calls a bash script (using system("path/to/script.sh")), the return value only reports the exit status. How do I receive the output of the script itself, so that I can in turn display it in the GUI?
For a conceptual example: if I were trying to display the output of (systemctl --type service | cut -d " " -f 1) in a GUI I have created in C++, how would I go about doing that? Is this even the correct way to do what I am trying to accomplish? If not,
What is the right way? and
Is there still a way to do it using my current method?
I have looked for a solution to this problem but I can't find information on how to return values from Bash to C++, only how to call Bash scripts from C++.
We're going to take advantage of the popen function here.
#include <cstdio>
#include <string>

std::string exec(const char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";
    char buffer[128];
    std::string result;
    // loop on fgets directly: it returns NULL at EOF or on error
    while (fgets(buffer, sizeof(buffer), pipe) != NULL)
        result += buffer;
    pclose(pipe);
    return result;
}
This function takes a command as an argument, and returns the output as a string.
NOTE: this will not capture stderr! A quick and easy workaround is to redirect stderr to stdout by appending 2>&1 to the end of your command.
Here is documentation on popen. Happy coding :)
You have to run the command using popen instead of system and then loop over the returned file pointer.
Here is a simple example for the command ls -l:
#include <stdio.h>
#include <stdlib.h>

int main() {
    FILE *process;
    char buff[1024];
    process = popen("ls -l", "r");
    if (process != NULL) {
        /* fgets returns NULL at EOF; testing feof() instead would
           reprint the last buffer once more after the stream ends */
        while (fgets(buff, sizeof(buff), process) != NULL) {
            printf("%s", buff);
        }
        pclose(process);
    }
    return 0;
}
The long approach - which gives you complete control of stdin, stdout, and stderr of the child process, at the cost of fairly significant complexity - involves using fork and execve directly.
Before forking, set up your endpoints for communication - pipe works well, or socketpair. I'll assume you've invoked something like below:
int childStdin[2], childStdout[2], childStderr[2];
pipe(childStdin);
pipe(childStdout);
pipe(childStderr);
After fork, in child process before execve:
dup2(childStdin[0], 0); // childStdin read end to fd 0 (stdin)
dup2(childStdout[1], 1); // childStdout write end to fd 1 (stdout)
dup2(childStderr[1], 2); // childStderr write end to fd 2 (stderr)
.. then close all of childStdin, childStdout, and childStderr.
After fork, in parent process:
close(childStdin[0]);  // parent keeps only the write end of the child's stdin
close(childStdout[1]); // parent keeps only the read ends of the child's
close(childStderr[1]); // stdout and stderr pipes
Now, your parent process has complete control of the std i/o of the child process - and must safely multiplex childStdin[1], childStdout[0], and childStderr[0], while also monitoring for SIGCHLD and eventually using a wait-family call to check the process termination code. pselect is particularly good for dealing with SIGCHLD while handling std i/o asynchronously. See also select or poll, of course.
If you want to merge the child's stdout and stderr, just dup2(childStdout[1], 2) in the child and get rid of childStderr entirely.
The man pages should fill in the blanks from here. So that's the hard way, should you need it.

How to set stderr for daemon C++

I am using a third-party library which prints errors to stderr, and it doesn't provide any callback for logging.
I am using the Linux daemon() call to daemonize the process.
Is there a way I can point stderr at a file after the daemon() call?
Use the open system call to open the file, then do this: dup2(filefd, 2). That makes file descriptor 2 (stderr) refer to the opened file; you can then close(filefd). You can do the open before calling daemon(), but I wouldn't recommend the dup2 and subsequent close until after calling daemon(), since daemon() itself redirects the standard descriptors to /dev/null.
In the code using the third-party library, you can "reroute" the std::cerr stream.
e.g. something like:
#include <fstream>
#include <iostream>

std::ofstream outputFileStream;
outputFileStream.open("outputfile.txt");
std::streambuf *yourStreamBuffer = outputFileStream.rdbuf();
std::cerr.rdbuf(yourStreamBuffer);
std::cerr << "Ends up in the file, not on std::cerr!";
outputFileStream.close();
Note that this only affects the C++ std::cerr stream; if the library writes via the C stderr FILE (fprintf and friends), you need the dup2 approach instead.

Pipes between Python and C++ don't get closed

I am spawning a process in Python using subprocess and want to read the program's output through a pipe. The C++ program does not seem to close the pipe, though, even when explicitly told to do so.
#include <cstdlib>
#include <ext/stdio_filebuf.h>
#include <iostream>
int main(int argc, char **argv) {
int fd = atoi(argv[1]);
__gnu_cxx::stdio_filebuf<char> buffer(fd, std::ios::out);
std::ostream stream(&buffer);
stream << "Hello World" << std::endl;
buffer.close();
return 0;
}
I invoke this small program with this python snippet:
import os
import subprocess
read, write = os.pipe()
proc = subprocess.Popen(["./dummy", str(write)])
data = os.fdopen(read, "r").read()
print data
The read() method does not return, as the fd is not closed. Opening and closing the write fd in python solves the problem. But it seems like a hack to me. Is there a way to close the fd in my C++ process?
Thanks a lot!
Spawning a child process on Linux (all POSIX OSes, really) is usually accomplished via fork and exec. After fork, both processes have the file descriptor open. The C++ process closes its copy, but read() only reports EOF once every write end of the pipe is closed - and the parent still holds one. This is normal for code using fork, and is usually handled by a wrapper around fork. Read the man page for pipe. Python has no way of knowing which descriptors are meant for the child, so it doesn't know what to close in the parent; calling os.close(write) in the Python parent right after Popen fixes it.
POSIX file descriptors are per-process, though they are inherited across fork/exec. Also note that modern Python 3 closes all descriptors other than 0, 1, and 2 by default (close_fds=True), so the write fd won't even reach the C++ process unless you pass pass_fds=(write,) to Popen.
Perhaps the easiest way would be to have the C++ process write its output to stdout (with cout <<), and have Python call Popen with stdout=PIPE and read proc.stdout (or use proc.communicate()) instead of using fdopen. This should work on Windows, too.
For passing the file descriptor as a command-line argument, see Ben Voigt's answer.

Redirecting standard output to syslog

I'm planning to package OpenTibia Server for Debian. One of the things I want to do is add startup via /etc/init.d and daemonization of the otserv process.
Thing is, we should probably redirect output to syslog. This is usually done via the syslog() function. Currently, the code is littered with:
std::cout << "Stuff to printout" << std::endl;
Is there a proper, easy to add, way to redirect standard output and standard error output into syslog without replacing every single "call" to std::cout and friends?
You can pipe your stdout to syslog with the logger command:
NAME
logger - a shell command interface to the syslog(3) system log module
SYNOPSIS
logger [-isd] [-f file] [-p pri] [-t tag] [-u socket] [message ...]
DESCRIPTION
Logger makes entries in the system log. It provides a shell command
interface to the syslog(3) system log module.
If you don't supply a message on the command line, it reads stdin, so you can pipe the daemon's output straight into it, for example: ./otserv 2>&1 | logger -t otserv
You can redirect any stream in C++ via the rdbuf() method. This is a bit convoluted to implement, but not that hard.
You need to write a streambuf that outputs to syslog on overflow(), and replace std::cout's rdbuf with that streambuf.
An example that would output to a file (no error handling, untested code):
#include <iostream>
#include <fstream>
using namespace std;

int main(int argc, char** argv) {
    ofstream outputFileStream;
    outputFileStream.open("theOutputFile.txt");
    streambuf *originalBuffer = cout.rdbuf(); // keep the old buffer so we can restore it
    cout.rdbuf(outputFileStream.rdbuf());
    cout << "Ends up in the file, not std::cout!";
    cout.rdbuf(originalBuffer); // restore before outputFileStream is destroyed
    outputFileStream.close();
    return 0;
}
Not sure whether a straight "C" answer suffices; but in "C" you can use underlying stdio features to plug the (FILE*) directly into syslog calls, without an intervening "logger" process. Check out
http://mischasan.wordpress.com/2011/05/25/redirecting-stderr-to-syslog/
Try wrapping the execution of the binary in a suitable script that just reads stdout and stderr and sends any data read from them on via syslog(). That should work without any code changes in the wrapped application, and be pretty easy.
Not sure whether there are existing scripts to pipe into, but writing one shouldn't be hard if there aren't.
I just wrote some code that will do this. It uses ASL instead of syslog, and it uses kevent, so you may need to port it to different APIs for your system (syslog instead of ASL, and poll/select instead of kevent).
http://cgit.freedesktop.org/xorg/app/xinit/tree/launchd/console_redirect.c
Furthermore, I basically added this to libsystem_asl on Mountain Lion. Check out the man page for asl_log_descriptor.
Example:
#include <asl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main() {
asl_log_descriptor(NULL, NULL, ASL_LEVEL_INFO, STDOUT_FILENO, ASL_LOG_DESCRIPTOR_WRITE);
asl_log_descriptor(NULL, NULL, ASL_LEVEL_NOTICE, STDERR_FILENO, ASL_LOG_DESCRIPTOR_WRITE);
fprintf(stdout, "This is written to stdout which will be at log level info.");
fprintf(stderr, "This is written to stderr which will be at log level notice.");
return 0;
}