I'm trying to use ffmpeg to do some operations for me. It's really simple for now. I want to suppress the ffmpeg output in my console, redirecting it either to strings or to a .txt file that I can control. I'm on Windows 10.
I have tried _popen (with both "r" and "w") and system("ffmpeg command > output.txt"), with no success.
#include <iostream>
#include <stdio.h>
using namespace std;
#define BUFSIZE 256
int main()
{
/* 1.
int x = system("ffmpeg -i video.mp4 -i audio.mp4 -c copy output.mp4 > output.txt");
*/
/* 2.
FILE* p;
p = _popen("ffmpeg -i video.mp4 -i audio.mp4 -c copy output.mp4", "w");
_pclose(p);
*/
/* 3.
char cmd[200] = { "ffmpeg -i video.mp4 -i audio.mp4 -c copy output.mp4" };
char buf[BUFSIZE];
FILE* fp;
if ((fp = _popen(cmd, "r")) == NULL) {
printf("Error opening pipe!\n");
return -1;
}
while (fgets(buf, BUFSIZE, fp) != NULL) {
// Do whatever you want here...
// printf("OUTPUT: %s", buf);
}
if (_pclose(fp)) {
printf("Command not found or exited with error status\n");
return -1;
}
*/
return 0;
}
Further in the development, I would like to know when the ffmpeg process has finished (maybe I can monitor the ffmpeg return value?) or to display only the last line if some error occurred.
I have made it work.
In solution 1, I added " 2>&1" to the end of the string.
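For reference, a minimal sketch of that fix (same filenames as above); system()'s return value also answers the earlier question about monitoring ffmpeg's exit status:

#include <cstdlib>
#include <iostream>

int main()
{
    // 2>&1 folds ffmpeg's progress output (written to stderr) into output.txt
    int rc = std::system("ffmpeg -i video.mp4 -i audio.mp4 -c copy output.mp4 > output.txt 2>&1");
    if (rc != 0)
        std::cout << "ffmpeg failed with exit status " << rc << '\n';
    return 0;
}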
Found it here: ffmpeg command line write output to a text file
Thanks!
I am a C++ programmer. I wanted to automate the tasks of compiling, running and debugging a program into one neat PowerShell script, but it unexpectedly throws an unrelated error and I don't know why.
The script takes C++ file(s) as input, produces a compiled .exe file and runs the program, all at once. It also accepts a few other debugging options.
if (!($args.count -ge 1)) {
Write-Host "Missing arguments: Provide the filename to compile"
exit
}
$isRun = 1
$EXE_NM = [IO.Path]::GetFileNameWithoutExtension($args[0])
$GPP_ARGS = "-o " + $EXE_NM + ".exe"
$count = 0
foreach ($op in $args) {
if ($op -eq "-help" -or $op -eq "-?") {
Write-Host "Format of the command is as follows:-"
Write-Host "cpr [filename.cpp] {additional files}"
Write-Host "{-add [compiler options] (all options of -add should be in double quotes altogether)}"
Write-Host "[-d (short for -add -g)] [-nr (do not run automatically)]"
exit
} elseif ($op.Contains(".cxx") -or $op.Contains(".cpp")) {
$op = """$op"""
$GPP_ARGS += " " + $op
} elseif ($op -eq "-add") {
if (($count+1) -ne $args.Count) {
$GPP_ARGS += " " + $args[$count+1]
}
} elseif ($op -eq "-d") {
$GPP_ARGS += " -g"
} elseif ($op -eq "-nr") {
$isRun = 0
}
$count += 1
}
$err = & g++.exe $GPP_ARGS 2>&1
if ($LastExitCode -eq 0) {
if ($isRun -eq 1) {
if ($isDebug -eq 1) {
gdb.exe $EXE_NM
} else {
iex $EXE_NM
}
}
if ($err.length -ne 0) {
Write-Host $err -ForegroundColor "Yellow"
}
} else {
Write-Host "$err" -ForegroundColor "Red"
}
For example: When I try to do cpr.ps1 HCF.cpp it throws the following error:
g++.exe: fatal error: no input files
compilation terminated.
I have ensured that the .cpp file exists in the current working directory.
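For context, a likely cause: when $GPP_ARGS is a single string, PowerShell hands it to g++.exe as one argument, so g++ treats everything after -o as the output file name and sees no input files. A minimal sketch of the array-based call (reusing $EXE_NM from the script above; not the recommendation below, which avoids the script entirely):

$gppArgs = @('-o', "$EXE_NM.exe", $args[0])
$err = & g++.exe @gppArgs 2>&1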
I second the recommendation of using make rather than writing your own build script. A simple re-usable Makefile isn't that difficult to write:
CXX = g++.exe
CPPFLAGS ?= -O2 -Wall
# get all files whose extension begins with c and is at least 2 characters long,
# i.e. foo.cc, foo.cpp, foo.cxx, ...
# NOTE: this also includes files like foo.class, etc.
SRC = $(wildcard *.c?*)
# pick the first file from the above list and change the extension to .exe
APP = $(basename $(word 1, $(SRC))).exe
$(APP): $(SRC)
$(CXX) $(CPPFLAGS) -o $@ $<
.PHONY: run
run: $(APP)
@./$(APP)
.PHONY: debug
debug: $(APP)
@gdb $(APP)
.PHONY: clean
clean:
$(RM) $(APP)
make builds the program (if required). make run executes the program after building it. make debug runs the program in gdb. make clean deletes the program.
You can override the default CPPFLAGS by defining an environment variable:
$env:CPPFLAGS = '-g'
make
I am trying to use the CMake --find-package mode to verify in a one-liner if a usable version of boost exists.
This can generally be done via
cmake --find-package -DNAME=Boost -DCOMPILER_ID=GNU -DLANGUAGE=CXX -DMODE=EXIST
but I would also like to check whether a minimum version was found.
For example, in a CMakeLists.txt this would read
find_package(Boost 1.60.0)
I tried to encode the version in -DNAME, searched the web for the problem, and tried guessing a variable like -DMINVERSION, but could not find a solution or a documentation entry describing further options.
A work-around I am currently using looks like this:
echo -e "#include <boost/version.hpp>\n#include <iostream>\n" \
"int main() { std::cout << BOOST_VERSION << std::endl; return 0; }" \
| g++ -I$BOOST_ROOT/include -x c++ - || { echo 0; }
BOOST_FOUND=$([ $(./a.out) -ge 106000 ] && { echo 0; } || { echo 1; })
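An alternative sketch that avoids running a.out: since BOOST_VERSION is major*100000 + minor*100 + patch (so 1.60.0 is 106000), the preprocessor can enforce the minimum, and a plain syntax check succeeds or fails accordingly (file name hypothetical):

// boost_check.cpp -- fails to compile if Boost < 1.60.0
#include <boost/version.hpp>
#if BOOST_VERSION < 106000
#error "Boost too old: need at least 1.60.0"
#endif
int main() { return 0; }

g++ -I$BOOST_ROOT/include -fsyntax-only boost_check.cpp && echo "Boost >= 1.60.0 found"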
I have the following code:
char command[256];
sprintf(command,"addr2line %p -e xcep_app", trace[i]);
addr2lineWriter = popen(command, "r");
if (addr2lineWriter == NULL)
flag = false;
if (flag == true) //execute parsing the output only if the command ran in the first place
{
while (fgets(addr2line_output, sizeof(addr2line_output)-1, addr2lineWriter) != NULL)
{
std::string addr2line_output_(addr2line_output);
complete_backtrace_.push_back(addr2line_output_);
}
pclose(addr2lineWriter);
}
Everything works fine, but I always get the following message:
sh: 1: Syntax error: word unexpected (expecting ")")
Why does it appear, what does it mean, and how can I stop it?
I have the gnome terminal installed.
This is a guess based on the error message. trace[i] is NULL, so the command generated is:
addr2line (nil) -e xcep_app
And so the shell that is invoked to execute the command complains about the parentheses. @WumpusQ.Wumbley reports that ash will reproduce this message:
ash -c 'addr2line (nil) -e xcep_app'
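A hedged sketch of the corresponding guard, reusing the variables from the question; snprintf additionally protects the fixed-size buffer:

if (trace[i] != NULL) {
    // only resolve real return addresses
    snprintf(command, sizeof(command), "addr2line %p -e xcep_app", trace[i]);
    addr2lineWriter = popen(command, "r");
    if (addr2lineWriter == NULL)
        flag = false;
} else {
    flag = false;   // nothing to resolve for this frame
}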
Question about code performance: I'm trying to run ~25 regex rules against a ~20 GB text file. The script should output matches to text files; each regex rule generates its own file. See the pseudocode below:
regex_rules=~/Documents/rulesfiles/regexrulefile.txt
for tmp in *.unique20gbfile.suffix; do
while read line
# Each $line in the looped-through file contains a regex rule, e.g.,
# egrep -i '(^| )justin ?bieber|(^| )selena ?gomez'
# $rname is a unique rule name generated by a separate bash function
# exported to the current shell.
do
cmd="$line $tmp > ~/outputdir/$tmp.$rname.filter.piped &"
eval $cmd
done < $regex_rules
done
Couple thoughts:
Is there a way to loop the text file just once, evaluating all rules and splitting to individual files in one go? Would this be faster?
Is there a different tool I should be using for this job?
Thanks.
This is the reason grep has a -f option. Reduce your regexrulefile.txt to just the regexps, one per line, and run
egrep -f regexrulefile.txt the_big_file
This produces all the matches in a single output stream, but you can do your loop thing on it afterward to separate them out. Assuming the combined list of matches isn't huge, this will be a performance win.
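For the separation step, a sketch: re-run each rule over the (much smaller) combined match file, assuming regexrulefile.txt holds one bare pattern per line and a numeric index stands in for the rule name:

egrep -f regexrulefile.txt the_big_file > all_matches.txt
ruleno=0
while IFS= read -r rule; do
  ruleno=$((ruleno + 1))
  egrep "$rule" all_matches.txt > "matches.rule$ruleno.txt"
done < regexrulefile.txt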
I did something similar with lex. Of course, it runs every other day, so YMMV. It is very fast, even on several hundred megabyte files on a remote windows share. It takes only a few seconds to process. I don't know how comfortable you are hacking up a quick C program, but I've found this to be the fastest, easiest solution for large scale regex problems.
Parts redacted to protect the guilty:
/**************************************************
start of definitions section
***************************************************/
%{
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <getopt.h>
#include <errno.h>
char inputName[256];
// static insert variables
//other variables
char tempString[256];
char myHolder[256];
char fileName[256];
char unknownFileName[256];
char stuffFileName[256];
char buffer[5];
/* FILE pointers let us open the output files dynamically and switch
   which file the rules write to at run time */
FILE *yyTemp;
FILE *yyUnknown;
FILE *yyStuff;
// flags for command line options
static int help_flag = 0;
// number of lines routed to the unknown file
static int unknownCount = 0;
%}
%option 8bit
%option nounput nomain noyywrap
%option warn
%%
/************************************************
start of rules section
*************************************************/
(\"A\",\"(1330|1005|1410|1170)\") {
strcat(myHolder, yytext);
yyTemp = &(*yyStuff);
} //stuff files
. { strcat(myHolder, yytext); }
\n {
if (&(*yyTemp) == &(*yyUnknown))
unknownCount += 1;
strcat(myHolder, yytext);
//print to file we are pointing at, whatever it is
fprintf(yyTemp, "%s", myHolder);
strcpy(myHolder, "");
yyTemp = &(*yyUnknown);
}
<<EOF>> {
strcat(myHolder, yytext);
fprintf(yyTemp, "%s", myHolder);
strcpy(myHolder, "");
yyTemp = yyUnknown;
yyterminate();
}
%%
/****************************************************
start of code section
*****************************************************/
int main(int argc, char **argv)
{
/****************************************************
The main method drives the program. It gets the filename from the
command line, and opens the initial files to write to. Then it calls the lexer.
After the lexer returns, the main method finishes out the report file,
closes all of the open files, and prints out to the command line to let the
user know it is finished.
****************************************************/
int c;
// the gnu getopt library is used to parse the command line for flags
// afterwards, the final option is assumed to be the input file
while (1) {
static struct option long_options[] = {
/* These options set a flag. */
{"help", no_argument, &help_flag, 1},
/* These options don't set a flag. We distinguish them by their indices. */
{0, 0, 0, 0}
};
/* getopt_long stores the option index here. */
int option_index = 0;
c = getopt_long (argc, argv, "h",
long_options, &option_index);
/* Detect the end of the options. */
if (c == -1)
break;
switch (c) {
case 0:
/* If this option set a flag, do nothing else now. */
if (long_options[option_index].flag != 0)
break;
printf ("option %s", long_options[option_index].name);
if (optarg)
printf (" with arg %s", optarg);
printf ("\n");
break;
case 'h':
help_flag = 1;
break;
case '?':
/* getopt_long already printed an error message. */
break;
default:
abort ();
}
}
if (help_flag == 1) {
printf("proper syntax is: yourProgram.exe [OPTIONS]... INFILE\n");
printf("splits csv file into multiple files")
printf("Option list: \n");
printf("--help print help to screen\n");
printf("\n");
return 0;
}
//get the filename off the command line and redirect it to input
//if there is no filename then use stdin
if (optind < argc) {
FILE *file;
file = fopen(argv[optind], "r");
if (!file) {
fprintf (stderr, "%s: Couldn't open file %s; %s\n", argv[0], argv[optind], strerror (errno));
exit(errno);
}
yyin = file;
strcpy(inputName, argv[optind]);
}
else {
printf("no input file set, using stdin. Press ctrl-c to quit");
yyin = stdin;
strcpy(inputName, "\b\b\b\b\bagainst stdin");
}
//set up initial file names: chop the 4-character extension (the globals are
//zero-filled, so the strncpy results stay NUL-terminated)
strcpy(fileName, inputName);
strncpy(unknownFileName, fileName, strlen(fileName)-4);
strncpy(stuffFileName, fileName, strlen(fileName)-4);
strcat(unknownFileName, "_UNKNOWN_1.csv");
strcat(stuffFileName, "_STUFF_1.csv");
//open files for writing (yyTemp only ever points at one of the other
//FILE handles, so it needs no allocation of its own)
yyout = stdout;
yyUnknown = fopen(unknownFileName,"w");
yyTemp = yyUnknown;
yyStuff = fopen(stuffFileName,"w");
yylex();
//close open files
fclose(yyUnknown);
fclose(yyStuff);
printf("Lexer finished running %s",fileName);
return 0;
}
To build this flex program, have flex installed, and use this makefile (adjust the paths):
TARGET = project.exe
TESTBUILD = project
LEX = flex
LFLAGS = -Cf
CC = i586-mingw32msvc-gcc
CFLAGS = -O -Wall
INSTALLDIR = /mnt/J/Systems/executables
.PHONY: default all clean install uninstall cleanall
default: $(TARGET)
all: default install
OBJECTS = $(patsubst %.l, %.c, $(wildcard *.l))
%.c: %.l
$(LEX) $(LFLAGS) -o $@ $<
.PRECIOUS: $(TARGET) $(OBJECTS)
$(TARGET): $(OBJECTS)
$(CC) $(OBJECTS) $(CFLAGS) -o $@
linux: $(OBJECTS)
gcc $(OBJECTS) $(CFLAGS) -lm -g -o $(TESTBUILD)
cleanall: clean uninstall
clean:
-rm -f *.c
-rm -f $(TARGET)
-rm -f $(TESTBUILD)
uninstall:
-rm -f $(INSTALLDIR)/$(TARGET)
install:
cp -f $(TARGET) $(INSTALLDIR)
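Typical invocations, as defined by the targets above:

make            # cross-compile project.exe with i586-mingw32msvc-gcc
make linux      # native gcc build for quick testing
make install    # copy project.exe to $(INSTALLDIR)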
A quick (though not necessarily the fastest) Perl solution:
#!/usr/bin/perl
use strict; use warnings;
We preload the regexes so that we read their file only once. They are stored in the array @regex. The regex file is the first file given as an argument.
open REGEXES, '<', shift(@ARGV) or die;
my @regex = map {qr/$_/} <REGEXES>;
# use the following if the file still includes the egrep:
# my @regex = map {
# s/^egrep \s+ -i \s+ '? (.*?) '? \s* $/$1/x;
# qr{$_}
# } <REGEXES>;
close REGEXES or die;
We go through each remaining file that was given as an argument:
while (@ARGV) {
my $filename = shift #ARGV;
We pre-open files for efficiency:
my @outfile = map {
    open my $fh, '>', "outdir/$filename.$_.filter.piped"
        or die "Couldn't open outfile for $filename, rule #$_";
    $fh;
} (1 .. scalar(@regex));
open BIGFILE, '<', $filename or die;
We print all lines that match a rule to the specified file.
while (not eof BIGFILE) {
my $line = <BIGFILE>;
for my $ruleNo (0..$#regex) {
    print {$outfile[$ruleNo]} $line if $line =~ $regex[$ruleNo];
    # if only the first match is interesting:
    # if ($line =~ $regex[$ruleNo]) {
    #     print {$outfile[$ruleNo]} $line;
    #     last;
    # }
}
}
Cleaning up before the next iteration:
foreach (@outfile) {
close $_ or die;
}
close BIGFILE or die;
}
print "Done";
Invocation: $ perl ultragrepper.pl regexFile bigFile1 bigFile2 bigFile3 etc. Anything quicker would have to be written directly in C. Your hard-disk data transfer speed is the limit.
This should run quicker than the bash counterpart because I avoid re-opening files and re-parsing the regexes. Plus, no new processes have to be spawned for external tools. But we could fork several workers! (at least NumOfProcessors * 2 may be sensible)
local $SIG{CHLD} = undef;
while (@ARGV) {
    next if fork();   # parent: hand this file to a child, move on
    ...;              # child: process its one file here
    last;             # child leaves the loop when done
}
I also decided to come back here and write a perl version, before noticing that amon had already done it. Since it's already written, here's mine:
#!/usr/bin/perl -W
use strict;
# The search spec file consists of lines arranged in pairs like this:
# file1
# [Ff]oo
# file2
# [Bb]ar
# The first line of each pair is an output file. The second line is a perl
# regular expression. Every line of the input file is tested against all of
# the regular expressions, so an input line can end up in more than one
# output file if it matches more than one of them.
sub usage
{
die "Usage: $0 search_spec_file [inputfile...]\n";
}
@ARGV or usage();
my @spec;
my $specfile = shift();
open my $spec, '<', $specfile or die "$specfile: $!\n";
while(<$spec>) {
chomp;
my $outfile = $_;
my $regexp = <$spec>;
defined($regexp) or die "$specfile: Invalid: Odd number of lines\n";
chomp $regexp;
open my $out, '>', $outfile or die "$outfile: $!\n";
push @spec, [$out, qr/$regexp/];
}
close $spec;
while(<>) {
for my $spec (@spec) {
    my ($out, $regexp) = @$spec;
print $out $_ if /$regexp/;
}
}
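Invocation mirrors amon's script, spec file first (script name hypothetical):

perl splitgrep.pl search_spec_file bigfile1 bigfile2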
Reverse the structure: read the file in, then loop over the rules so you only perform matches on individual lines.
regex_rules=~/Documents/rulesfiles/regexrulefile.txt
for tmp in *.unique20gbfile.suffix; do
while read line ; do
while read rule
# Each $rule read from the rules file contains a regex command, e.g.,
# egrep -i '(^| )justin ?bieber|(^| )selena ?gomez'
# $rname is a unique rule name generated by a separate bash function
# exported to the current shell.
do
cmd=" echo $line | $rule >> ~/outputdir/$tmp.$rname.filter.piped &"
eval $cmd
done < $regex_rules
done < $tmp
done
At this point though you could/should use bash (or perl's) built-in regex matching rather than have it fire up a separate egrep process for each match, as in the sketch below. You might also be able to split the file and run parallel processes. (Note I also corrected > to >>)
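A minimal sketch of that built-in matching, assuming the rules file is reduced to bare ERE patterns (one per line) and using a numeric index in place of the $rname helper:

#!/bin/bash
shopt -s nocasematch   # emulate egrep -i for [[ =~ ]]
regex_rules=~/Documents/rulesfiles/regexrulefile.txt
mapfile -t rules < "$regex_rules"
for tmp in *.unique20gbfile.suffix; do
  while IFS= read -r line; do
    for ruleno in "${!rules[@]}"; do
      # unquoted right-hand side makes =~ treat the rule as an ERE
      [[ $line =~ ${rules[$ruleno]} ]] &&
        printf '%s\n' "$line" >> ~/outputdir/"$tmp.rule$ruleno.filter.piped"
    done
  done < "$tmp"
done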