Executing a bash shell command and extracting the output --> invalid data error - C++

I want to extract the frame count of a video from a file. For this purpose, I launch an ffprobe command (part of ffmpeg) via the bash shell, and I want to extract its output. This command works well in the bash shell and returns the output as wanted.
ffprobe -v error -count_frames -of flat=s=_ -select_streams v:0 -show_entries stream=nb_read_frames /home/peter/DA/videos/IMG-2014-1-10-10-4-37.avi
I want to call it via C++ and read out the result. I use the IDE Qt 4.8.6 with GCC 4.8 compiler.
For my code, I use this template:
executing shell command with popen
and changed it for my needs to
#include <iostream>
#include <string>
#include <stdio.h>

using namespace std;

int main()
{
    FILE* pipe = popen("echo $(ffprobe -v error -count_frames -of flat=s=_ -select_streams v:0 -show_entries stream=nb_read_frames /home/peter/DA/videos/IMG-2014-1-10-10-4-37.avi)", "r");
    if(!pipe)
    {
        cout << "error" << endl;
        return 1;
    }
    char* buffer = new char[512];
    string result;
    fgets(buffer, sizeof(buffer), pipe);
    while(!feof(pipe))
    {
        if(fgets(buffer, sizeof(buffer), pipe) != NULL)
        {
            cout << buffer << endl;
            result += buffer;
        }
    }
    pclose(pipe);
    cout << result << endl;
    return 0;
}
The Qt console returned this warning, and the program is ending with return 0:
/home/peter/DA/videos/IMG-2014-1-10-10-4-37.avi: Invalid data found when processing input
and "pipe" is empty.
When I compile the main.cpp file above with g++ in the shell, it works fine too.

Old post, but as I see it, there are two points here:
Error "Invalid data found when processing input"
That's a normal ffprobe file-processing error. It usually happens when there are errors inside the media file; it is not related to the C++ program.
ffprobe writes warning/error messages to the stderr stream, but popen only captures the stdout stream; that's why your program couldn't get that error message through the pipe.
How to get stdout+stderr in my program
popen allows executing any shell command, so we can use it to redirect stderr into stdout, so that your program can get that output too, like this:
FILE *pipe = popen("ffprobe ... 2>&1", "r");
The 2> redirects handle #2's output into the current output of handle #1 (&1), where #1 = stdout and #2 = stderr.
There's absolutely no need to execute FILE *pipe = popen("echo $(ffprobe ...)", "r");, because the final result will be the same: $(...) returns a string with the command's stdout output, and echo just prints it. Totally redundant.
A few observations in order to improve your code:
When a string is too big to be displayed in one screen width, it's better to split it into multiple lines (maybe grouping the text in each line by some logic), because that will improve the readability of your code for other people (and eventually for yourself in a few months).
You can do this with a C/C++ compiler feature that concatenates adjacent string literals separated by whitespace (spaces, newlines, tabs, etc.); e.g. "hi " "world" is the same as "hi world" to the compiler.
When your program has to write error messages, use the stderr stream. In C++ that's std::cerr instead of std::cout.
Always free allocated memory when it's no longer used (each new has to have a matching delete, and each new[] a matching delete[]).
Avoid using namespace std;; instead, write using std::name; for each standard class/instance that you'll use, e.g. using std::string;. That avoids future problems, especially in big programs. In general, avoid using namespace xxxx;.
Reorganizing your code, we have:
#include <iostream>
#include <string>
#include <stdio.h>

using std::string;
using std::cout;
using std::cerr;
using std::endl;

int main() {
    static const char ffprobeCmd[] =
        "ffprobe "                                        // command
        "-v error "                                       // args
        "-count_frames "
        "-of flat=s=_ "
        "-select_streams v:0 "
        "-show_entries stream=nb_read_frames "
        "/home/peter/DA/videos/IMG-2014-1-10-10-4-37.avi" // file
        " 2>&1";                                          // Send stderr to stdout

    FILE *pipe = popen(ffprobeCmd, "r");
    if (!pipe) {
        perror("Cannot open pipe.");
        return 1;
    }

    char buffer[512];  // a stack array, so sizeof(buffer) is the real size
    string result;
    while (fgets(buffer, sizeof(buffer), pipe) != NULL) {
        result += buffer;
    }

    // See Note below
    int retCode = pclose(pipe);
    if (retCode != 0) {
        // Program ended with an error; result holds the error message
        cerr << "retCode: " << retCode << "\nMessage: " << result << endl;
        return retCode;
    }
    // Program ended normally, prints: streams_stream_0_nb_read_frames="xxx"
    cout << result << endl;
    return 0;
}
Note
pclose is not strictly intended to return the executed program's exit code; on POSIX systems it returns the command's termination status as reported by waitpid, so if you need the actual exit code, extract it with the WEXITSTATUS macro, and check how this behaves on your system. In any case, it will be zero only if everything was OK.

Related

How to redirect program output as its input

I've written a simple C++ program for tutorial purposes.
My goal is to loop it infinitely.
#include <iostream>
#include <string>

int main()
{
    std::cout << "text";
    for(;;) {
        std::string string_object{};
        std::getline(std::cin, string_object);
        std::cout << string_object;
    }
    return 0;
}
After compilation I run it like this:
./bin 0>&1
What I expected to happen is that the "text" that is output to stdout would now also become stdin for the program, and it would loop forever. Why doesn't that happen?
First, you need to output newlines when printing to std::cout, otherwise std::getline() won't have any complete line to read.
Improved version:
#include <iostream>
#include <string>

int main()
{
    std::cout << "stars" << std::endl;
    for(;;) {
        std::string string_object;
        std::getline(std::cin, string_object);
        std::cout << string_object << std::endl;
    }
    return 0;
}
Now try this:
./bin >file <file
you don't see any output, because it's going to the file. But if you stop the program and look at the file, behold, it's full of
stars
stars
stars
stars
:-)
Also, the reason that the feedback loop cannot start when you try
./bin 0>&1
is, that you end up with both stdin and stdout connected to /dev/tty
(meaning that you can see the output).
But a TTY device cannot ever close the loop, because it actually consists of two separate channels, one passing the output to the terminal, one passing the terminal input to the process.
If you use a regular file for in- and output, the loop can be closed. Every byte written to the file will be read from it as well, if the stdin of the process is connected to it. That's as long as no other process reads from the file simultaneously, because each byte in a stream can be only read once.
Since you're using gcc, I'm going to assume you have pipe available.
#include <cstring>
#include <iostream>
#include <unistd.h>

int main() {
    char buffer[1024];
    std::strcpy(buffer, "test");

    int fd[2];
    ::pipe(fd);
    ::dup2(fd[1], STDOUT_FILENO);
    ::close(fd[1]);
    ::dup2(fd[0], STDIN_FILENO);
    ::close(fd[0]);

    ::write(STDOUT_FILENO, buffer, 4);

    while(true) {
        auto const read_bytes = ::read(STDIN_FILENO, buffer, 1024);
        ::write(STDOUT_FILENO, buffer, read_bytes);
#if 0
        std::cerr.write(buffer, read_bytes);
        std::cerr << "\n\tGot " << read_bytes << " bytes" << std::endl;
#endif
        sleep(2);
    }
    return 0;
}
The #if 0 section can be enabled to get debugging. I couldn't get it to work with std::cout and std::cin directly, but somebody who knows more about the low-level stream code could probably tweak this.
Debug output:
$ ./io_loop
test
Got 4 bytes
test
Got 4 bytes
test
Got 4 bytes
test
Got 4 bytes
^C
Because the stdout and stdin don't create a loop. They may point to the same tty, but a tty is actually two separate channels, one for input and one for output, and they don't loop back into one another.
You can try creating a loop by running your program with its stdin connected to the read end of a pipe and its stdout connected to the write end. That will work with cat:
mkfifo fifo
{ echo text; strace cat; } <>fifo >fifo
...
read(0, "text\n", 131072) = 5
write(1, "text\n", 5) = 5
read(0, "text\n", 131072) = 5
write(1, "text\n", 5) = 5
...
But not with your program. That's because your program is trying to read lines, but its writes are not terminated by a newline. Fixing that and also printing the read line to stderr (so we don't have to use strace to demonstrate that anything happens in your program), we get:
#include <iostream>
#include <string>

int main()
{
    std::cout << "text" << std::endl;
    for(;;) {
        std::string string_object{};
        std::getline(std::cin, string_object);
        std::cerr << string_object << std::endl;
        std::cout << string_object << std::endl;
    }
}
g++ foo.cc -o foo
mkfifo fifo; ./foo <>fifo >fifo
text
text
text
...
Note: the <>fifo way of opening a named pipe (fifo) was used in order to open both its read and its write end at once and so avoid blocking. Instead of reopening the fifo from its path, the stdout could simply be dup'ed from the stdin (prog <>fifo >&0) or the fifo could be first opened as a different file descriptor, and then the stdin and stdout could be opened without blocking, the first in read-only mode and the second in write-only mode (prog 3<>fifo <fifo >fifo 3>&-).
They will all work the same with the example at hand. On Linux, :|prog >/dev/fd/0 (and echo text | strace cat >/dev/fd/0) would also work -- without having to create a named pipe with mkfifo.

Redirecting output to file then back to console in C++

The task is to read input from input.txt and write the output to output.txt.
However on completion of the above tasks, further instructions/output should now be displayed to the console.
I came to know about freopen() in C++, which works fine for the first half of the given task. But unfortunately, I have no idea how to redirect the output back to the console again.
void writeIntoFile(){
    freopen("input.txt", "r", stdin);   // Task 1. Reading from input.txt file
    freopen("output.txt", "w", stdout); // Task 2. Writing to output.txt file
    printf("This sentence is redirected to a file.");
    fclose(stdout);
    printf("This sentence is redirected to console"); // Task 3. Write further output to console
}
What I expected from fclose() was that it would finish writing into the text file and that further output would then go to the console, but it doesn't. How can I achieve task 3 as well?
Probably what you are looking for is rdbuf(), as mentioned by doomista in the comments.
Here is a way to redirect Output.
#include <iostream>
#include <fstream>

int main()
{
    /** backup cout buffer and redirect to out.txt **/
    std::ofstream out("out.txt");
    auto *coutbuf = std::cout.rdbuf();
    std::cout.rdbuf(out.rdbuf());

    std::cout << "This will be redirected to file out.txt" << std::endl;

    /** reset cout buffer **/
    std::cout.rdbuf(coutbuf);
    std::cout << "This will be printed on console" << std::endl;

    return 0;
}

Always output to screen and allow redirection

I'm writing a small CLI application, and I want to allow the user to redirect its output to a file. While the standard cout statements go to the output.txt file, I want progress to always go to the screen:
./myApp > output.txt
10% complete
...
90% complete
Completed
Is this possible? How can I do it?
Thanks in advance!!
This will work even if both stdout and stderr have been redirected:
spectras@etherbee:~$ ./term
hello terminal!
spectras@etherbee:~$ ./term >/dev/null 2>&1
hello terminal!
The idea is to open the controlling terminal of the process directly, bypassing any redirection, like this:
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/tty", O_WRONLY);
    if (fd < 0 && errno != ENODEV) {
        /* something went wrong */
        return 1;
    }

    int hasTTY = (fd >= 0);
    if (hasTTY) {
        write(fd, "hello terminal!\n", 16);
    }
    return 0;
}
From man 4 tty:
The file /dev/tty is a character file with major number 5 and
minor number 0, usually of mode 0666 and owner.group root.tty. It is
a synonym for the controlling terminal of a process, if any.
If you're using C++, you might want to wrap the file descriptor in a custom streambuf, so you can use the regular stream API on it. Alternately, some implementations of the C++ library offer extensions for that purpose.
Or, if you don't care about getting the error code reliably, you could just std::ofstream terminal("/dev/tty").
Also as a design consideration if you do this, offering a quiet option to let the user turn off the writing to the terminal is a good idea.
Your process cannot know if the shell redirects the standard console output (std::cout) or not.
So you'll need another handle that lets you output to the terminal independently of that redirection.
As @Mark mentioned in their comment, you could (ab-)use¹ std::cerr to do that, along with some ASCII trickery to overwrite the current output line at the terminal (look at backspace characters: '\b').
¹) Not to mention the mess printed at the terminal if the output isn't actually redirected.
You can write your progress indicators to the stderr stream. They will appear on the console if the user redirects stdout to a file.
For example:
fprintf(stderr, "10%% complete\n");
I figured out how to do it, even if the user redirects stderr. The following code gets the name of the current terminal and checks whether our output is being redirected. It also has a my_write() function that lets you write both to the terminal and to the redirect file if stdout has been redirected. Use my_write() with the writetoterm variable wherever you want to write something that should always reach the terminal. The extern "C" has to be there; otherwise (on Debian 9 with GCC 6.3, anyway) the ttyname() function just returns NULL all the time.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <iostream>
#include <string>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <sstream>

using std::string;
using std::cout;
using std::endl;
using std::cerr;
using std::stringstream;

void my_write(bool writetoterm, int termfd, const string& data)
{
    if(writetoterm && termfd >= 0)
    {
        ssize_t result = write(termfd, data.c_str(), data.length());
        if(result < 0 || (size_t)result < data.length()){
            cerr << "Error writing data to tty" << endl;
        }
    }
    cout << data;
}

extern "C" {
char* GetTTY(int fd){
    return ttyname(fd);
}
}

int main(int argc, char** argv){
    bool writetoterm = false;
    if(!GetTTY(STDOUT_FILENO)){
        // stdout is not a TTY, i.e. it has been redirected
        writetoterm = true;
    }
    int ttyfd = -1;
    char* tty_of_stderr = GetTTY(2);
    if(tty_of_stderr){
        ttyfd = open(tty_of_stderr, O_WRONLY);
    }
    if(ttyfd < 0){
        // error in opening
        cout << strerror(errno) << endl;
    }
    string data = "Hello, world!\n";
    my_write(true, ttyfd, data);
    int num_for_cout = 42;
    stringstream ss;
    ss << "If you need to use cout to send something that's not a string" << endl;
    ss << "Do this: " << num_for_cout << endl;
    my_write(writetoterm, ttyfd, ss.str());
    return 0;
}
I found the official std:: method of handling this. There is another stream: std::clog. It is intended for logging/informational output and writes to the standard error stream, so it still appears on the terminal even when the user redirects the program's output with myProgram > out.txt (though not if stderr is redirected as well).
Thanks this was great to see all the methods that this can be done.

C++ executing a code from a text file

The title says it all. Like a compiler, but not the same: I want a certain part of the code to be executed from an external source in C++, and I don't know whether that is even possible, as it is in JavaScript, Python, etc.
Let's say I got a text file titled sample.txt like this:
int a = 10, c;
cin >> c;
cout << a + c;
And I'll call the text file in the main() function and compile the .exe file. When I run it, is it possible for it to behave dynamically, as if the code from the file were embedded there, and write 10 + input? The source file may change, and different results will apply the next time it runs. It's just an example piece of code, by the way; I might want to run some for/while loops, if conditions, etc.
PS: I know how to read from file and assign it as an integer or write it on the screen, that's not what I want. I need functions to be compiled through. Thanks.
There are a couple of approaches to this.
Fork a command-line compiler to build the fragment, then fork again to run the result, passing input and output through STDIN/STDOUT.
(While I realize you asked for this from a C++ calling program, I'm demonstrating the idea in bash for simplicity.)
File: run_it.sh:
#!/bin/bash
set -e # Bail on the first error
FN_IN=./sample.txt
FN_WRAPPED=./wrapped.cc
FN_EXE=./wrapped
CC=g++
# Wrap the fragment in a full C++ program and compile it.
function build_wrapper () {
cat > $FN_WRAPPED <<'EOF'
#include <iostream>
using namespace std;
int main(int argc, char **argv) {
EOF
cat $FN_IN >> $FN_WRAPPED
cat >> $FN_WRAPPED <<'EOF'
return 0;
}
EOF
$CC -o $FN_EXE $FN_WRAPPED
}
# Run the wrapper, passing input through STDIN and reading the output from STDOUT.
function run () {
local IN=$1
echo $IN | $FN_EXE
}
# Remove the wrapper (both code and compiled).
function clean_up () {
rm -f $FN_WRAPPED $FN_EXE
}
build_wrapper
IN=24
OUT=$(echo "$IN" | $FN_EXE)
echo "Result = $OUT"
echo "Another result = $(run 16)"
clean_up
$ ./run_it.sh
Result = 34
Another result = 26
Use something like LLVM to compile the function in-process and call it
This approach is very powerful, but also somewhat involved.
In a nutshell, you'd want to
Read sample.txt into memory.
Compile it to a function.
Call the function.
Some possibly-helpful links:
http://fdiv.net/2012/08/15/compiling-code-clang-api
http://www.ibm.com/developerworks/library/os-createcompilerllvm1/
http://llvm.org/docs/tutorial/
https://msm.runhello.com/p/1003
https://db.in.tum.de/teaching/ss15/moderndbs/resources/7/fibonacci.cpp
A compiled C++ program contains only the machine instructions necessary to execute the compiled source code; the language standard does not specify any mechanism for the user to produce additional machine instructions at run time.
In order to provide a scripting capability - the ability to generate program flow in response to the parsing of input text - you have to provide a parser and an execution engine.
#include <iostream>
#include <string>

int main()
{
    std::string cmd;
    int op1, op2;
    while (std::cin >> cmd >> op1 >> op2) {
        if (cmd == "add")
            std::cout << op1 + op2 << "\n";
        else if (cmd == "sub")
            std::cout << op1 - op2 << "\n";
        else
            std::cerr << "error\n";
    }
}
Many interpreted languages are written in C or C++ in the first place, so it is often possible to build them as a library which you can then incorporate into an application so that the program can call invoke them to provide an embedded scripting language. Common examples of such languages are Lua, Python and JavaScript. Your program can then pass code to be executed to the interpreter.
Writing your own lua interpreter could look like this:
#include <iostream>
#include <string>

extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

bool get_input(std::string& in)
{
    bool result;
    do {
        std::cout << "lua> " << std::flush;
        result = static_cast<bool>(std::getline(std::cin, in));
    } while (result && in.empty());
    return result;
}

int main(void)
{
    lua_State *L = lua_open();   // open lua
    luaL_openlibs(L);            // open standard libraries
    std::string in;
    while (get_input(in)) {
        if (in.empty())
            continue;
        int error = luaL_loadbuffer(L, in.c_str(), in.size(), "line") ||
                    lua_pcall(L, 0, 0, 0);
        if (error) {
            std::cerr << lua_tostring(L, -1) << '\n';
            lua_pop(L, 1);       // remove the error message from the stack
        }
    }
    lua_close(L);
}
Under Linux:
$ g++ -Wall -O3 -o mylua mylua.cpp -I/usr/include/lua5.1 -llua
$ ./mylua
lua> print("hello")
hello
lua> error
[string "line"]:1: '=' expected near '<eof>'
lua>

c++ stream input process

I want to convert the text output from one program into GUI output, and the first program produces a line of output every microsecond. If I connect them with a pipe command in Linux, how does the second program receive the output line by line and process it? In other words, my C++ program's input is a stream of unlimited length.
Thank you
Program1:
#include <iostream>

int main()
{
    std::cout << "Program1 says Hello world!" << std::endl; // output to standard out using std::cout
}
Program2:
#include <iostream>
#include <string>

int main()
{
    std::string line;
    while (std::getline(std::cin, line)) // Read from standard in line by line using std::cin and std::getline
    {
        std::cout << "Line received! Contents: " << line << std::endl;
    }
}
Now if you run Program1 and pipe into Program2, you should get:
$ ./Program1 | ./Program2
Line received! Contents: Program1 says Hello world!
Note that Program2 will keep reading from standard input until EOF is reached (or some error occurs).
For your case, Program1 is the thing that generates the output, and Program2 is the GUI program that consumes the output. Note that, in order for the program to be non-blocking, you'll probably want to do your reading from standard input on a separate thread.
As for your constraint of "receiving input every microsecond": that may not be realistic. It will process as fast as it can, but you can't rely on standard in/out being that fast, especially if you're sending a decent amount of data.