Why can't I pipe output from both execl calls? - C++

Possible duplicates:
How to call execl() in C with the proper arguments?
Grabbing output from exec
Linux Pipes as Input and Output
Using dup2 for piping
Piping for input/output
I've been trying to learn piping in Linux using dup/dup2 and fork for the last three days. I think I've got the hang of it, but when I call two different programs from the child process, I seem to capture output only from the first one called. I don't understand why that is and/or what I'm doing wrong. This is my primary question.
Edit: I think a possible solution is to fork another child and set up pipes with dup2, but I'm mostly wondering why the code below doesn't work. What I mean is, I would expect to capture stderr from the first execl call and stdout from the second. This doesn't seem to be happening.
My second question is if I am opening and closing the pipes correctly. If not, I would like to know what I need to add/remove/change.
Here is my code:
#include <stdlib.h>
#include <iostream>
#include <time.h>
#include <sys/wait.h>
#define READ_END 0
#define WRITE_END 1
void parentProc(int* stdoutpipe, int* stderrpipe);
void childProc(int* stdoutpipe, int* stderrpipe);
int main(){
pid_t pid;
int status;
int stdoutpipe[2]; // pipe
int stderrpipe[2]; // pipe
// create a pipe
if (pipe(stdoutpipe) || pipe(stderrpipe)){
std::cerr << "Pipe failed." << std::endl;
return EXIT_FAILURE;
}
// fork a child
pid = fork();
if (pid < 0) {
std::cerr << "Fork failed." << std::endl;
return EXIT_FAILURE;
}
// child process
else if (pid == 0){
childProc(stdoutpipe, stderrpipe);
}
// parent process
else {
std::cout<< "waitpid: " << waitpid(pid, &status, 0)
<<'\n'<<std::endl;
parentProc(stdoutpipe, stderrpipe);
}
return 0;
}
void childProc(int* stdoutpipe, int* stderrpipe){
dup2(stdoutpipe[WRITE_END], STDOUT_FILENO);
close(stdoutpipe[READ_END]);
dup2(stderrpipe[WRITE_END], STDERR_FILENO);
close(stderrpipe[READ_END]);
execl("/bin/bash", "/bin/bash", "foo", NULL);
execl("/bin/ls", "ls", "-1", (char *)0);
// execl("/home/me/outerr", "outerr", "-1", (char *)0);
//char * msg = "Hello from stdout";
//std::cout << msg;
//msg = "Hello from stderr!";
//std::cerr << msg << std::endl;
// close write end now?
}
void parentProc(int* stdoutpipe, int* stderrpipe){
close(stdoutpipe[WRITE_END]);
close(stderrpipe[WRITE_END]);
char buffer[256];
char buffer2[256];
read(stdoutpipe[READ_END], buffer, sizeof(buffer));
std::cout << "stdout: " << buffer << std::endl;
read(stderrpipe[READ_END], buffer2, sizeof(buffer2));
std::cout << "stderr: " << buffer2 << std::endl;
// close read end now?
}
When I run this, I get the following output:
yfp> g++ selectTest3.cpp; ./a.out
waitpid: 21423
stdout: hB�(6
stderr: foo: line 1: -bash:: command not found
The source code for the "outerr" binary (commented out above) is simply:
#include <iostream>
int main(){
std::cout << "Hello from stdout" << std::endl;
std::cerr << "Hello from stderr!" << std::endl;
return 0;
}
When I call "outerr," instead of ls or "foo" I get the following output, which I would expect:
yfp> g++ selectTest3.cpp; ./a.out
waitpid: 21439
stdout: Hello from stdout
stderr: Hello from stderr!

On execl
Once you successfully call execl or any other function from the exec family, the original process is completely overwritten by the new process. This implies that the new process never "returns" to the old one. If you have two execl calls in a row, the only way the second one can be executed is if the first one fails.
In order to run two different commands in a row, you have to fork one child to run the first command, wait, fork a second child to run the second command, then (optionally) wait for the second child too.
On read
The read system call does not append a terminating null, so in general you need to look at the return value, which tells you the number of bytes actually read. Then set the following character to null to get a C string, or use the range constructor for std::string.
On pipes
Right now you are using waitpid to wait until the child process has already finished, then reading from the pipes. The problem with this is that if the child process produces a lot of output, then it will block because the pipe gets full and the parent process is not reading from it. The result will be a deadlock, as the child waits for the parent to read, and the parent waits for the child to terminate.
What you should do is use select to wait for input to arrive on either the child's stdout or the child's stderr. When input arrives, read it; this will allow the child to continue. When the child process dies, you'll know because you'll get end of file on both. Then you can safely call wait or waitpid.

The exec family of functions replaces the current process image with a new process image. When you execute,
execl("/bin/bash", "/bin/bash", "foo", NULL);
the code from the current process is not executed any more. That's why you never see the result of executing
execl("/bin/ls", "ls", "-1", (char *)0);

Related

How to redirect program output as its input

I've written a simple C++ program for tutorial purposes.
My goal is to loop it infinitely.
#include <iostream>
#include <string>
int main()
{
std::cout << "text";
for(;;) {
std::string string_object{};
std::getline(std::cin, string_object);
std::cout << string_object;
}
return 0;
}
After compilation I run it like this:
./bin 0>&1
What I expected to happen is that the "text" written to stdout would now also become stdin for the program, and it would loop forever. Why doesn't that happen?
First, you need to output newlines when printing to std::cout, otherwise std::getline() won't have any complete line to read.
Improved version:
#include <iostream>
#include <string>
int main()
{
std::cout << "stars" << std::endl;
for(;;) {
std::string string_object;
std::getline(std::cin, string_object);
std::cout << string_object << std::endl;
}
return 0;
}
Now try this:
./bin >file <file
you don't see any output, because it's going to the file. But if you stop the program and look at the file, behold, it's full of
stars
stars
stars
stars
:-)
Also, the reason that the feedback loop cannot start when you try
./bin 0>&1
is that you end up with both stdin and stdout connected to /dev/tty
(meaning that you can see the output).
But a TTY device cannot ever close the loop, because it actually consists of two separate channels, one passing the output to the terminal, one passing the terminal input to the process.
If you use a regular file for in- and output, the loop can be closed. Every byte written to the file will be read from it as well, if the stdin of the process is connected to it. That's as long as no other process reads from the file simultaneously, because each byte in a stream can be only read once.
Since you're using gcc, I'm going to assume you have pipe available.
#include <cstring>
#include <iostream>
#include <unistd.h>
int main() {
char buffer[1024];
std::strcpy(buffer, "test");
int fd[2];
::pipe(fd);
::dup2(fd[1], STDOUT_FILENO);
::close(fd[1]);
::dup2(fd[0], STDIN_FILENO);
::close(fd[0]);
::write(STDOUT_FILENO, buffer, 4);
while(true) {
auto const read_bytes = ::read(STDIN_FILENO, buffer, 1024);
::write(STDOUT_FILENO, buffer, read_bytes);
#if 0
std::cerr.write(buffer, read_bytes);
std::cerr << "\n\tGot " << read_bytes << " bytes" << std::endl;
#endif
sleep(2);
}
return 0;
}
The #if 0 section can be enabled to get debugging. I couldn't get it to work with std::cout and std::cin directly, but somebody who knows more about the low-level stream code could probably tweak this.
Debug output:
$ ./io_loop
test
Got 4 bytes
test
Got 4 bytes
test
Got 4 bytes
test
Got 4 bytes
^C
Because the stdout and stdin don't create a loop. They may point to the same tty, but a tty is actually two separate channels, one for input and one for output, and they don't loop back into one another.
You can try creating a loop by running your program with its stdin connected to the read end of a pipe, and with its stdout to its write end. That will work with cat:
mkfifo fifo
{ echo text; strace cat; } <>fifo >fifo
...
read(0, "text\n", 131072) = 5
write(1, "text\n", 5) = 5
read(0, "text\n", 131072) = 5
write(1, "text\n", 5) = 5
...
But not with your program. That's because your program is trying to read lines, but its writes are not terminated by a newline. Fixing that and also printing the read line to stderr (so we don't have to use strace to demonstrate that anything happens in your program), we get:
#include <iostream>
#include <string>
int main()
{
std::cout << "text" << std::endl;
for(;;) {
std::string string_object{};
std::getline(std::cin, string_object);
std::cerr << string_object << std::endl;
std::cout << string_object << std::endl;
}
}
g++ foo.cc -o foo
mkfifo fifo; ./foo <>fifo >fifo
text
text
text
...
Note: the <>fifo way of opening a named pipe (fifo) was used in order to open both its read and its write end at once and so avoid blocking. Instead of reopening the fifo from its path, the stdout could simply be dup'ed from the stdin (prog <>fifo >&0) or the fifo could be first opened as a different file descriptor, and then the stdin and stdout could be opened without blocking, the first in read-only mode and the second in write-only mode (prog 3<>fifo <fifo >fifo 3>&-).
They will all work the same with the example at hand. On Linux, :|prog >/dev/fd/0 (and echo text | strace cat >/dev/fd/0) would also work -- without having to create a named pipe with mkfifo.

Check whether a command executed successfully WITHOUT system functions using fork()

fork() creates a new process by duplicating the calling process in a separate memory space. Whether we are in the parent or in the forked child can be checked via the pid_t value returned by the fork() function.
I used fork() to create some concurrent processes from a single parent. These processes are commands in a shell that no need to be executed.
I'm wondering how I can check whether a command is a valid command that can be executed, without using the system functions and without actually executing it.
#include <iostream>
#include <unistd.h>
#include <string>
int main(){
std::string command = "/bin/ls";
//std::string invalidCommand = "/bin/123";
pid_t pid = fork();
if(pid == -1 || `here I want to check if the command is executable without execution`){
std::cout << "Error in forking or Command is not executable" << std::endl;
}
else if (pid == 0){
std::cout << "Process has been forked and valid to execute" << std::endl;
}
return 0;
}
These processes are commands in a shell that no need to be executed.
I don't fully understand what you want to say with this sentence. However, I think you are not aware of how fork() works and how system() is related to fork():
As you already found out, fork() duplicates a running process; this means that the program is now running twice and all variables exist twice (just like if you run your program twice).
system() internally uses fork() to create a copy of the process; in the newly created process it uses one of the exec() variants (such as execvp()) to replace the program in the new process by another program.
Then it uses one of the wait() variants (such as waitpid()) to wait for the new process to finish:
fflush(stdout);
fflush(stderr);
int newpid = fork();
if(newpid == 0)
{
execlp("ls", "ls", "./subdirectory", NULL);
std::cerr << "Could not start \"ls\".\n";
fflush(stderr);
_exit(1);
}
if(newpid < 0)
{
std::cerr << "Not enough memory.\n";
}
else
{
int code;
waitpid(newpid, &code, 0);
if(code == 0) std::cout << "\"ls\" was successful.";
else std::cout << "\"ls\" was not successful.";
}
If you want to have "special" behaviour (such as re-directing stdout to a file), you typically don't use the system() function but you will implement the program the way it is shown above.
I'm wondering how I can check whether the command is a valid command ...
Without running the program it is nearly impossible to find out if some command is executable:
It is possible to find out if a program with some name (e.g. "/usr/bin/ls") is existing and marked as executable using the access() function (this is what command -v or test -x do).
However, this test will not detect if a file mistakenly has the x flag set although the file is a document file and not a program. (This is often the case for files on Windows-formatted media.)
Even when wait() returns the value passed to the _exit() function, it is difficult to check whether the reason is that exec() failed (meaning the program could not be executed) or whether the program that was started returned the same code that we use in our _exit() call.
You can send some information from the new process to the original process once the exec() function has returned (the exec() function never returns on success). However, sending this information is not that easy: just setting a variable will not work:
int ls_failed = 0;
int pid = fork();
if(pid == 0)
{
execlp("ls", "ls", "./subdirectory", NULL);
ls_failed = 1;
_exit(1);
}
waitpid(pid, NULL, 0);
if(ls_failed > 0) std::cout << "Starting \"ls\" failed.";
The two processes behave as if you had started the program twice; therefore both processes have their own variables, so the variable ls_failed in the newly started process is not identical to the variable ls_failed in the original process.
std::cout << ...
Please note that std::cout probably internally performs an fwrite(...,stdout). This function will not write directly to the terminal but it will write to some buffer. When the buffer is full, all data is written at once.
When calling fork() the buffer is duplicated; when using _exit() or exec(), the data in the buffer is lost.
This may lead to weird effects:
std::cout << "We are doing some fork() now.";
int pid = fork();
if(pid == 0)
{
std::cout << "\nNow the child process is running().";
_exit(0);
}
waitpid(pid, NULL, 0);
std::cout << "\nThe child process has finished.\n";
Depending on the buffer size we could get the following output:
We are doing some fork() now.
Now the chi ng some fork() now.
The child process has finished.
Therefore, we should perform an fflush(stdout) and an fflush(stderr) before using fork(), an exec() variant, or _exit(), unless we know that the corresponding buffer (stdout for std::cout and stderr for std::cerr) is empty.
You can use the wait system call and check the return value of the command that you attempted to execute; beyond that, fork is really all you need. Try reading the man page of wait.
https://www.man7.org/linux/man-pages/man2/wait.2.html

Is output read from popen()ed FILE* complete before pclose()?

pclose()'s man page says:
The pclose() function waits for the associated process to terminate and returns the exit status of the command as returned by wait4(2).
I feel like this means if the associated FILE* created by popen() was opened with type "r" in order to read the command's output, then you're not really sure the output has completed until after the call to pclose(). But after pclose(), the closed FILE* must surely be invalid, so how can you ever be certain you've read the entire output of command?
To illustrate my question by example, consider the following code:
// main.cpp
#include <iostream>
#include <cstdio>
#include <cerrno>
#include <cstring>
#include <sys/types.h>
#include <sys/wait.h>
int main( int argc, char* argv[] )
{
FILE* fp = popen( "someExecutableThatTakesALongTime", "r" );
if ( ! fp )
{
std::cout << "popen failed: " << errno << " " << strerror( errno )
<< std::endl;
return 1;
}
char buf[512] = { 0 };
fread( buf, sizeof buf, 1, fp );
std::cout << buf << std::endl;
// If we're only certain the output-producing process has terminated after the
// following pclose(), how do we know the content retrieved above with fread()
// is complete?
int r = pclose( fp );
// But if we wait until after the above pclose(), fp is invalid, so
// there's nowhere from which we could retrieve the command's output anymore,
// right?
std::cout << "exit status: " << WEXITSTATUS( r ) << std::endl;
return 0;
}
My questions, as inline above: if we're only certain the output-producing child process has terminated after the pclose(), how do we know the content retrieved with the fread() is complete? But if we wait until after the pclose(), fp is invalid, so there's nowhere from which we could retrieve the command's output anymore, right?
This feels like a chicken-and-egg problem, but I've seen code similar to the above all over, so I'm probably misunderstanding something. I'm grateful for an explanation on this.
TL;DR executive summary: how do we know the content retrieved with the fread() is complete? — we've got an EOF.
You get an EOF when the child process closes its end of the pipe. This can happen when it calls close explicitly or exits. Nothing can come out of your end of the pipe after that. After getting an EOF you don't know whether the process has terminated, but you do know for sure that it will never write anything to the pipe.
By calling pclose you close your end of the pipe and wait for termination of the child. When pclose returns, you know that the child has terminated.
If you call pclose without getting an EOF, and the child tries to write stuff to its end of the pipe, it will fail (in fact it will get a SIGPIPE and probably die).
There is absolutely no room for any chicken-and-egg situation here.
Read the documentation for popen more carefully:
The pclose() function shall close a stream that was opened by popen(), wait for the command to terminate, and return the termination status of the process that was running the command language interpreter.
It blocks and waits.
I learned a couple things while researching this issue further, which I think answer my question:
Essentially: yes it is safe to fread from the FILE* returned by popen prior to pclose. Assuming the buffer given to fread is large enough, you will not "miss" output generated by the command given to popen.
Going back and carefully considering what fread does: it effectively blocks until (size * nmemb) bytes have been read or end-of-file (or error) is encountered.
Thanks to C - pipe without using popen, I understand better what popen does under the hood: it does a dup2 to redirect its stdout to the write-end of the pipe it uses. Importantly: it performs some form of exec to execute the specified command in the forked process, and after this child process terminates, its open file descriptors, including 1 (stdout) are closed. I.e. termination of the specified command is the condition by which the child process' stdout is closed.
Next, I went back and thought more carefully about what EOF really was in this context. At first, I was under the loosey-goosey and mistaken impression that "fread tries to read from a FILE* as fast as it can and returns/unblocks after the last byte is read". That's not quite true: as noted above: fread will read/block until its target number of bytes is read or EOF or error are encountered. The FILE* returned by popen comes from a fdopen of the read-end of the pipe used by popen, so its EOF occurs when the child process' stdout - which was dup2ed with the write-end of the pipe - is closed.
So, in the end what we have is: popen creating a pipe whose write end gets the output of a child process running the specified command, and whose read end is fdopened to a FILE* passed to fread. (Assuming fread's buffer is big enough), fread will block until EOF occurs, which corresponds to closure of the write end of popen's pipe resulting from termination of the executing command. I.e. because fread is blocking until EOF is encountered, and EOF occurs after command - running in popen's child process - terminates, it's safe to use fread (with a sufficiently large buffer) to capture the complete output of the command given to popen.
Grateful if anyone can verify my inferences and conclusions.
popen() is just a shortcut for a series of fork, dup2, execv, fdopen, etc. It gives us easy access to the child's STDOUT and STDIN via file stream operations.
After popen(), both the parent and the child process execute independently.
pclose() is not a 'kill' function; it just waits for the child process to terminate. Since it is a blocking function, output data generated while pclose() executes could be lost.
To avoid this data loss, call pclose() only when we know the child process has already terminated: an fgets() call returns NULL, fread() returns from blocking, the shared stream reaches its end, and feof() returns true.
Here is an example of using popen() with fread(). This function returns -1 if executing the process failed, 0 if OK. The child's output data is returned in szResult.
int exec_command( const char * szCmd, std::string & szResult ){
printf("Execute command : [%s]\n", szCmd );
FILE * pFile = popen( szCmd, "r");
if(!pFile){
printf("Execute command : [%s] FAILED !\n", szCmd );
return -1;
}
char buf[256];
//check if the output stream is ended.
while( !feof(pFile) ){
//try to read 255 bytes from the stream, this operation is BLOCKING ...
int nRead = fread(buf, 1, 255, pFile);
//there may be something or nothing to read, because the stream is closed or the program caught an error signal
if( nRead > 0 ){
buf[nRead] = '\0';
szResult += buf;
}
}
//the child process has already terminated. Clean it up or we have another zombie in the process table.
pclose(pFile);
printf("Exec command [%s] return : \n[%s]\n", szCmd, szResult.c_str() );
return 0;
}
Note that all file operations on the returned stream work in BLOCKING mode; the stream is opened without the O_NONBLOCK flag. fread() can block forever when the child process hangs and never terminates, so use popen() only with trusted programs.
To take more control of the child process and avoid blocking file operations, we should use fork/vfork/execlv, etc. ourselves, open the pipes with the O_NONBLOCK flag, and use poll() or select() from time to time to determine whether there is data, then use the read() function to read from the pipe.
Use waitpid() with WNOHANG periodically to see if the child process has terminated.

How to send a struct over a pipe C++

I am very new to writing in C++ and am working on using pipes to communicate between processes. I have written a very simple program that works when I am sending strings or integers, but when I try to send a struct (message in this case) I get null when I try to read it on the other side. Does anyone have some insight into this that they would share? Thanks for your time.
#include <unistd.h>
#include <iostream>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#define BUFFER_LEN sizeof(message)
using namespace std;
struct message{
int from;
string msg;
};
void childCode(int *pipeOUT, int *pipeIN, message buffer){
// Local Buffer for input from pipeIN
cout << "Child: Sending Message"<< endl;
buffer.msg = "Child:I am the child!!";
write(pipeOUT[1],(char*) &buffer, BUFFER_LEN); // Test Child -> Parent comms
cout << "Child: Message Sent"<<endl;
read(pipeIN[0],(char*) &buffer,BUFFER_LEN); // Test Child <- Parent comms
cout << "Child: Recieved: "<< buffer.msg << endl;
cout << "Child Exiting..."<< endl;
exit(0); // Child process End
}
int main(int argCount, char** argVector){
pid_t pid;
int childPipeIN[2];
int childPipeOUT[2];
message buffer; // Buffer for reading from pipe
// Make Parent <- Child pipe
int ret = pipe(childPipeIN);
if (ret == -1){
perror("There was an error creating the childPipeIN. Exiting...");
exit(1);
}
// Make Parent -> Child pipe
ret = pipe(childPipeOUT);
if (ret == -1){
perror("There was an error creating the childPipeOUT. Exiting...");
exit(1);
}
// Fork off Child
pid = fork();
if (pid == -1){
perror("There has been an issue forking off the child. Exiting...");
exit(1);
}
if (pid == 0){ // Child code
cout << "Child PID = " << getpid() << endl;
childCode(childPipeIN,childPipeOUT,buffer);
}
else{ // Parent Code
cout << "Parent PID = " << getpid() << endl;
// Test Parent <- Child comms
read(childPipeIN[0], (char*) &buffer, BUFFER_LEN);
cout << "Parent: I recieved this from the child...\n" << buffer.msg << endl;
buffer.msg = "Parent: Got you message!";
// Test Parent -> Child comms
write(childPipeOUT[1], (char*) &buffer, BUFFER_LEN);
wait(NULL);
cout << "Parent: Children are done. Exiting..." << endl;
}
exit(0);
}
Yeah. I voted to close. Then I read Dupe more closely and realized it didn't explain the problem or the solution very well, and the solution didn't really fit with OP's intent.
The problem:
One does not simply write a std::string into a pipe. std::string is not a trivial piece of data. There are pointers there that do not sleep.
Come to think of it, it's bloody dangerous to write a std::string into anything. Including another std::string. I would not, could not with a file. This smurf is hard to rhyme, so I'll go no further with Dr. Seuss.
To another process, the pointer that references the storage containing the string's data, the magic that allows strings to be resizable, likely means absolutely nothing, and if it does mean something, you can bet it's not something you want to mess with because it certainly isn't the string's data.
Even in the same process in another std::string the two strings cannot peacefully co-exist pointing to the same memory. When one goes out of scope, resizes, or does practically anything else that mutates the string badness will ensue.
Don't believe me? Check BUFFER_LEN. No matter how big your message gets, BUFFER_LEN never changes.
This applies to everything you want to write that isn't a simple hunk of data. Integer, write away. Structure of integers and an array of characters of fixed size, write away. std::vector? No such luck. You can write std::vector::data if and only if whatever it contains is trivial.
std::is_pod may help you decide what you can and cannot read and write the easy way.
Solution:
Serialize the data. Establish a communications protocol that defines the format of the data, then use that protocol as the basis of your reading and writing code.
Typical solutions for moving a string are null terminating the buffer just like in the good ol' days of C and prepending the size of the string to the characters in the string like the good old days of Pascal.
I like the Pascal approach because it allows you to size the receiver's buffer ahead of time. With null termination you have to play a few dozen rounds of Getta-byte looking for the null terminator and hope your buffer's big enough or compound the ugliness with the dynamic allocation and copying that comes with buffer resizes.
Writing is pretty much what you are doing now, but structure member by structure member. In the above case
Write message.from to pipe.
Write length of message.msg to pipe.
Write message.msg.data() to pipe.
Two caveats:
Watch your endian! Firmly establish the byte order used by your protocol. If the native endian does not match the protocol endian, some bit shifting may be required to re-orient the message.
One man's int may be the size of another man's long so use fixed width integers.
Reading is a bit more complicated because a single call to read will return up to the requested length. It may take more than one read to get all the data you need, so you'll want a function that loops until all of the data arrives or cannot arrive because the pipe, file, socket, whatever is closed.
Loop on read until all of message.from has arrived.
Loop on read until all of the length of message.msg has arrived.
Use message.msg.resize(length) to size message.msg to hold the message.
Loop on read until all of message.msg has arrived. You can read the message directly into message.msg.data().

Executing a bash shell command and extracting the output --> invalid file error

I want to extract the framesize of a video from a file. For this purpose, I have launched an ffmpeg command via bash shell, and I want to extract the output. This command is working well in the bash shell, and returns the output as wanted.
ffprobe -v error -count_frames -of flat=s=_ -select_streams v:0 -show_entries stream=nb_read_frames /home/peter/DA/videos/IMG-2014-1-10-10-4-37.avi
I want to call it via C++ and read out the result. I use the IDE Qt 4.8.6 with GCC 4.8 compiler.
For my code, I use this template:
executing shell command with popen
and changed it for my demands to
#include <iostream>
#include <string>
#include <stdio.h>
using namespace std;
int main()
{
FILE* pipe = popen("echo $(ffprobe -v error -count_frames -of flat=s=_ -select_streams v:0 -show_entries stream=nb_read_frames /home/peter/DA/videos/IMG-2014-1-10-10-4-37.avi)", "r");
if(!pipe)
{
cout << "error" << endl;
return 1;
}
char* buffer = new char[512];
string result;
fgets(buffer, sizeof(buffer), pipe) ;
while(!feof(pipe))
{
if(fgets(buffer, sizeof(buffer), pipe) != NULL)
{
cout << buffer << endl;
result += buffer;
}
}
pclose(pipe);
cout << result<< endl;
return 0;
}
The Qt console showed me this warning, and the program is ending with return 0:
/home/peter/DA/videos/IMG-2014-1-10-10-4-37.avi: Invalid data found when processing input
and "pipe" is empty.
When I compile the main.cpp file above with g++ in the shell, it works nicely too.
Old post, but as I see, there are two points here:
Error "Invalid data found when processing input"
That's a normal ffprobe file-processing error. It usually happens when there are errors inside the media file; it is not related to the C++ program.
ffprobe writes warning/error messages to its stderr stream, but popen only captures the stdout stream; that's why your program couldn't get that error message through the pipe.
How to get stdout+stderr in my program
popen allows executing any shell command, so we can use it to redirect stderr into stdout so that your program can get that output too, like this:
FILE *pipe = popen("ffprobe ... 2>&1", "r");
The 2> redirects handle #2's output into the current handle #1 output (#1 = stdout, #2 = stderr).
There's absolutely no need to execute FILE *pipe = popen("echo $(ffprobe ...)", "r");, because the final result will be the same: $(...) returns a string with the command's stdout output, and echo prints it. Totally redundant.
A few observations in order to improve your code:
When a string is too long to display within one screen width, it's better to split it into multiple lines (perhaps grouping the text inside each line by some logic), because that improves the readability of your code for other people (and eventually for yourself in a few months).
You can do this with a C/C++ compiler feature that concatenates adjacent string literals separated by whitespace (newlines, tabs, etc.); e.g. "hi " "world" is the same as "hi world" to the compiler.
When your program has to write error messages, use the stderr stream. In C++ that's std::cerr instead of std::cout.
Always free allocated memory when it's no longer used (each new needs a matching delete, and each new[] a delete[]).
Avoid using namespace std;; instead, write using std::name; for each standard instance/class that you'll use, e.g. using std::string;. That avoids future problems, especially in big programs. An example of a common error is here. In general, avoid using namespace xxxx;.
Reorganizing your code, we have:
#include <iostream>
#include <stdio.h>
using std::string;
using std::cout;
using std::cerr;
using std::endl;
int main() {
static char ffprobeCmd[] =
"ffprobe " // command
"-v error " // args
"-count_frames "
"-of flat=s=_ "
"-select_streams v:0 "
"-show_entries stream=nb_read_frames "
"/home/peter/DA/videos/IMG-2014-1-10-10-4-37.avi" // file
" 2>&1"; // Send stderr to stdout
FILE *pipe = popen(ffprobeCmd, "r");
if (!pipe) {
perror("Cannot open pipe.");
return 1;
}
char* buffer = new char[512];
string result;
while ((fgets(buffer, 512, pipe)) != NULL) { // sizeof(buffer) would give the pointer's size here, not 512
result += buffer;
}
// See Note below
int retCode = pclose(pipe);
if (retCode != 0) {
// Program ends with error, and result has the error message
cerr << "retCode: " << retCode << "\nMessage: " << result << endl;
return retCode;
} else {
// Program ends normally, prints: streams_stream_0_nb_read_frames="xxx"
cout << result << endl;
}
delete[] buffer; // free memory (new[] pairs with delete[])
return 0;
}
Note
pclose is not primarily intended as a status-reporting call, but the value it returns is the child's termination status as reported by wait4(2), so on POSIX systems you can unpack it with the WIFEXITED/WEXITSTATUS macros; check your platform's documentation. In any case it will be zero only if everything was OK.