GIZA++ output missing *.ti.final and *.actual.ti.final files - c++

I am having issues understanding the basics of how to run GIZA++.
I went through the discussion here on Stack Overflow (Is there a tutorial about giza++?) and through the links people provided there. I have downloaded and compiled the latest GIZA++ from the moses-smt GitHub.
git clone https://github.com/moses-smt/giza-pp.git
cd giza-pp
make
After the successful compilation, I wrote a simple script for testing purposes.
#!/bin/bash
SRC=french
TRG=english
PREFIX=out
GIZA=../giza-pp
# Cleaning from previous run ...
rm -f *.log
rm -f *.vcb
rm -f *.snt
rm -f *.vcb.classes
rm -f *.vcb.classes.cats
rm -f *.gizacfg
rm -f *.cooc
rm -f ${PREFIX}*
# Converting plain text into sentence format using the "plain2snt.out" tool ...
${GIZA}/GIZA++-v2/plain2snt.out ${SRC} ${TRG}
# Generating word clusters using the "mkcls" tool ...
${GIZA}/mkcls-v2/mkcls -p${SRC} -V${SRC}.vcb.classes
${GIZA}/mkcls-v2/mkcls -p${TRG} -V${TRG}.vcb.classes
# Generating cooccurrence using the "snt2cooc" tool ...
${GIZA}/GIZA++-v2/snt2cooc.out ${SRC}.vcb ${TRG}.vcb ${SRC}_${TRG}.snt > ${SRC}_${TRG}.cooc
# Running "GIZA++" ...
${GIZA}/GIZA++-v2/GIZA++ -S ${SRC}.vcb -T ${TRG}.vcb -C ${SRC}_${TRG}.snt -CoocurrenceFile ${SRC}_${TRG}.cooc -o ${PREFIX} >> giza.log 2>&1
Now this is the content of the directory right after I run the script.
jakub@jakub-virtual-machine:~/Master/giza-pp_test$ ls
english french_english.snt out.d3.final out.perp
english_french.snt french.vcb out.d4.final out.t3.final
english.vcb french.vcb.classes out.D4.final out.trn.src.vcb
english.vcb.classes french.vcb.classes.cats out.Decoder.config out.trn.trg.vcb
english.vcb.classes.cats giza.log out.gizacfg out.tst.src.vcb
french out.a3.final out.n3.final out.tst.trg.vcb
french_english.cooc out.A3.final out.p0_3.final run_test.sh
The point is that the output is missing the files listed below, which are the important ones for me.
out.ti.final
out.actual.ti.final
Now I've been looking at GIZA++'s Main.cpp (lines 260-273) and can see the lines that should be creating these files.
cerr << "writing Final tables to Disk \n";
string t_inv_file = Prefix + ".ti.final" ;
if( !FEWDUMPS)
m1.getTTable().printProbTableInverse(t_inv_file.c_str(), m1.getEnglishVocabList(),
m1.getFrenchVocabList(),
m1.getETotalWCount(),
m1.getFTotalWCount());
t_inv_file = Prefix + ".actual.ti.final" ;
if( !FEWDUMPS )
m1.getTTable().printProbTableInverse(t_inv_file.c_str(),
eTrainVcbList.getVocabList(),
fTrainVcbList.getVocabList(),
m1.getETotalWCount(),
m1.getFTotalWCount(), true);
The "cerr" line does get printed in the log, but I just cannot figure out why these files are not present in the output.
jakub@jakub-virtual-machine:~/Master/giza-pp_test$ cat giza.log | tail -n 25
p0_count is 4.0073 and p1 is 5.99635; p0 is 0.400584 p1: 0.599416
Model4: TRAIN CROSS-ENTROPY 0.80096 PERPLEXITY 1.74226
Model4: (10) TRAIN VITERBI CROSS-ENTROPY 0.801289 PERPLEXITY 1.74266
Dumping alignment table (a) to file:out.a3.final
Dumping distortion table (d) to file:out.d3.final
Dumping nTable to: out.n3.final
Model4 Viterbi Iteration : 10 took: 0 seconds
H3333344444 Training Finished at: Fri Oct 23 16:24:44 2015
Entire Viterbi H3333344444 Training took: 0 seconds
==========================================================
writing Final tables to Disk
Writing PERPLEXITY report to: out.perp
Writing source vocabulary list to : out.trn.src.vcb
Writing source vocabulary list to : out.trn.trg.vcb
Writing source vocabulary list to : out.tst.src.vcb
Writing source vocabulary list to : out.tst.trg.vcb
writing decoder configuration file to out.Decoder.config
Entire Training took: 0 seconds
Program Finished at: Fri Oct 23 16:24:44 2015
==========================================================
Has anyone run into a similar problem? Is this some kind of bug, or am I doing something wrong?
Edit:
I have now recompiled the whole of GIZA++ without the -DBINARY_SEARCH_FOR_TTABLE option in the CFLAGS of the Makefile, and changed the script so that it no longer generates and passes the cooccurrence file to GIZA++. After re-running the script, the output did contain out.actual.ti.final and out.ti.final. Can anybody explain this behaviour? I thought that I would get better alignment and probability estimates using the cooccurrence file; is it actually needed, or is it only there to improve speed?
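Concretely, after the change, the end of the script looks roughly like this (the snt2cooc step is removed and -CoocurrenceFile is no longer passed):
# Running "GIZA++" without the cooccurrence file ...
${GIZA}/GIZA++-v2/GIZA++ -S ${SRC}.vcb -T ${TRG}.vcb -C ${SRC}_${TRG}.snt -o ${PREFIX} >> giza.log 2>&1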

I faced the same issue before.
I think the missing step is:
In the Makefile located at giza-pp/GIZA++-v2/, substitute the line:
CFLAGS_OPT = $(CFLAGS) -O3 -funroll-loops -DNDEBUG -DWORDINDEX_WITH_4_BYTE -DBINARY_SEARCH_FOR_TTABLE -DWORDINDEX_WITH_4_BYTE
with the line:
CFLAGS_OPT = $(CFLAGS) -O3 -funroll-loops -DNDEBUG -DWORDINDEX_WITH_4_BYTE -DWORDINDEX_WITH_4_BYTE
Check this out, and good luck.
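If you don't want to edit the Makefile by hand, here is a small sketch of the same change plus a rebuild (assuming the paths from the question and that the top-level Makefile has a clean target):
cd giza-pp
# Strip the flag from CFLAGS_OPT in the GIZA++ Makefile
sed -i 's/ -DBINARY_SEARCH_FOR_TTABLE//' GIZA++-v2/Makefile
make clean && make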

How to run the code using AFL on terminal

I have some code from GitHub that I am trying to run using AFL.
The code: https://github.com/karimmd/CScanner/tree/cfe7d08bf46b1eed0443f9e27bc089d68a830a45
I want to run the project and find vulnerabilities. I have put all the GitHub files inside a folder called code, so the file structure is CScanner-master/code/<all the files here>.
I am using this command in the terminal:
hemlatamahaur@Hemlatas-MacBook-Pro desktop % afl-fuzz -i CScanner-master -o code ./input-testcode.c
afl-fuzz 2.56b by <lcamtuf@google.com>
[+] You have 4 CPU cores and 2 runnable tasks (utilization: 50%).
[+] Try parallel jobs - see /usr/local/Cellar/afl-fuzz/2.57b/share/doc/afl/parallel_fuzzing.txt.
[*] Setting up output directories...
[+] Output directory exists but deemed OK to reuse.
[*] Deleting old session data...
[+] Output dir cleanup successful.
[*] Scanning 'CScanner-master'...
[+] No auto-generated dictionary tokens to reuse.
[*] Creating hard links for all input files...
[*] Validating target binary...
[-] PROGRAM ABORT : Program './input-testcode.c' not found or not executable
Location : check_binary(), afl-fuzz.c:6873
It keeps saying there is no file called input-testcode.c.
I am new to AFL, so I might be doing it wrong. How do I run this code using AFL to find the vulnerabilities? Any help is much appreciated.
You have to build your code using afl-clang first:
$ afl-clang input-testcode.c -o input-testcode
Then:
$ afl-fuzz -i CScanner-master -o code ./input-testcode
I hope it works.
afl-fuzz works on the executable, not on the C source file.
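Put together, the workflow is roughly as follows (the seed and output directory names here are just placeholders, not taken from the question):
# Build an instrumented binary from the source file
afl-clang input-testcode.c -o input-testcode
# afl-fuzz needs a directory with at least one seed input, and an output directory
mkdir -p seeds findings
echo 'hello' > seeds/seed1
# The target gets the fuzzed data on stdin; use @@ in the command line instead
# if the program expects an input file as an argument
afl-fuzz -i seeds -o findings -- ./input-testcode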

sclite (SCTK) `make check` failure, C++/Perl/Cygwin. Safe to use Perl4 stuff?

I am currently trying to install NIST's sclite, which is part of SCTK 2.4.0 (github or newer version). I am attempting the install on Cygwin in bash. The installation is done using make.
I have gotten past the make configure and make all parts of the installation. This didn't come without some effort (See the SO posts on the first (file not recognized) and second (template/scoping) problems). When I get to the make check part of the install, a lot of the checks/tests pass, but then I get the following error.
Testing acomp.pl
No tests defined for acomp.pl
make[2]: Leaving directory '/cygdrive/c/David/programs/sctk2.4.0/sctk/src/acomp'
(cd def_art; make check)
make[2]: Entering directory '/cygdrive/c/David/programs/sctk2.4.0/sctk/src/def_art'
Testing def_art.pl
def_art.pl passed without tests
make[2]: Leaving directory '/cygdrive/c/David/programs/sctk2.4.0/sctk/src/def_art'
(cd hubscr; make check)
make[2]: Entering directory '/cygdrive/c/David/programs/sctk2.4.0/sctk/src/hubscr'
Testing hubscr.pl
./RunTests.pl
Running test 'test1-sastt', operation 'test', options '-G -f rttm -F rttm -a', directory 'test1-sastt.test'
Executing command
Error: unable to get the version for program def_art.pl with the command 'def_art.pl' at ../hubscr.pl line 419.
Error: Execution failed at ./RunTests.pl line 30.
make[2]: *** [makefile:20: check] Error 2
make[2]: Leaving directory '/cygdrive/c/David/programs/sctk2.4.0/sctk/src/hubscr'
make[1]: *** [makefile:68: checkFast] Error 2
make[1]: Leaving directory '/cygdrive/c/David/programs/sctk2.4.0/sctk/src'
make: *** [makefile:52: check] Error 2
I've done some research (described below), and I've been able to get past this problem. However, this involved including some outdated perl modules (Perl4).
My first question was how to fix this error or how to skip that part of the test. I've been able to fix the error, and if people think that it's safe, I'll put it as an answer. Note that there is one more problem with make check after this problem is fixed, but I mention how to get past that at the end.
I'm wondering if using the old Perl (Perl4::CoreLibs) is safe and/or good programming practice. Would it be better to change the source code to use Perl5 stuff?
Is there a better way altogether?
One thing I want to be sure of is that there are no critical tests further down the make check line which might fail.
System Details
$ uname -a
CYGWIN_NT-6.1 CAP-D-ENG-INT3 2.10.0(0.325/5/3) 2018-02-02 15:16 x86_64 Cygwin
$ bash --version
GNU bash, version 4.4.12(3)-release (x86_64-unknown-cygwin) ...
$ gcc --version
gcc (GCC) 6.4.0 ...
$ g++ --version
g++ (GCC) 6.4.0 ...
$ make --version
GNU Make 4.2.1
Built for x86_64-unknown-cygwin ...
$ systeminfo | sed -n 's/^OS\ *//p'
Name: Microsoft Windows 7 Enterprise
Version: 6.1.7601 Service Pack 1 Build 7601
Manufacturer: Microsoft Corporation
Configuration: Member Workstation
Build Type: Multiprocessor Free
My Attempts/Research
From the output above, we have def_art.pl passing the check because there are no checks - "def_art.pl passed without tests". However, the next thing checked, hubscr.pl, failed. The error comes from def_art.pl.
The obvious thing to do seemed to be to run def_art.pl, which I did.
$ ./src/def_art/def_art.pl
Can't locate getopts.pl in @INC
(@INC contains: /usr/local/lib/perl5/site_perl/5.26/x86_64-cygwin-threads /usr/local/share/perl5/site_perl/5.26 /usr/lib/perl5/vendor_perl/5.26/x86_64-cygwin-threads /usr/share/perl5/vendor_perl/5.26 /usr/lib/perl5/5.26/x86_64-cygwin-threads /usr/share/perl5/5.26)
at ./src/def_art/def_art.pl line 40.
So it seems to me that this is a deprecated perl file (or module, or whatever).
I dug a little further and found a relevant discussion on a Kaldi issue from 2014. (Kaldi is a speech-recognition toolkit that uses the SCTK scoring system.) There are 3 sections of the discussion that I think are especially relevant, which I will link (first, second, third). I'll insert parts here:
def_art.pl is looking for getopts.pl which I couldn't find on my machine!
... [T]hese are legacy packages that are no longer supported in recent versions
of Perl 5. I don't think we should accept a dependency on them. They have
been deprecated since the beginning of Perl 5.
Instead of 'require "getopt.pl"', we should be doing
use Getopt::Std
(note: modern perl code should not call "require" for system packages).
There is a similar issue with "flush.pl" in the Perl scripts. I don't know
what the Perl 5 package name is.
... There are several places where this occurs.
I finally found that both getopts.pl and flush.pl are available from Perl4::CoreLibs. The URL that I used for wget was referenced at this site. Apparently, in other *NIX distros, the package manager can be used, e.g.
apt-get install libperl4-corelibs-perl
or
yum install perl-Perl4-CoreLibs
but I could not find an install via apt-cyg. I was able to install them from a tarball, as described in the What I'm Doing section.
Once again, I'll state my main question: Is this safe/good programming practice? Is there a better solution?
If there is a better solution (using Perl 5), it seems that this link might lead the way to it.
Some other links that are possibly related: link_{n} and link_{n+1} about flush.pl, link_{n+2} & link_{n+3} about getopts.pl and Perl4::CoreLibs.
What I'm Doing
$ mkdir perl_added
$ cd perl_added
$ wget http://search.cpan.org/CPAN/authors/id/Z/ZE/ZEFRAM/Perl4-CoreLibs-0.004.tar.gz
$ tar -xzf Perl4-CoreLibs-0.004.tar.gz
$ cd Perl4-CoreLibs-0.004
Rather than adding this directory's lib subdirectory to the PERLLIB environment variable with a one-time, command-line-only export, I did the following.
Make a new directory in /usr/lib and move the files there:
$ stat /usr/lib/libperl4-corelibs-perl
stat: cannot stat '/usr/lib/libperl4-corelibs-perl': No such file or directory
# Checked that the directory didn't already exist. It didn't exist.
$ mkdir /usr/lib/libperl4-corelibs-perl
# Make each file executable, then move it into the new directory
# I'd like to come back and explain this.
$ find ./lib -type f -name "*.pl" -print0 | xargs -I'{}' -0 \
bash -c 'new_dir=/usr/lib/libperl4-corelibs-perl/; chmod +x {}; \
mv {} ${new_dir}'
Finally, I made it so that this directory becomes part of the Perl search path every time I use a terminal, by adding the following line to my ~/.bashrc.
This command adds the path to the PERLLIB environment variable. Different flavors of Linux have different syntax for adding to environment variables, so make sure to find out what yours is!
export PERLLIB="/usr/lib/libperl4-corelibs-perl:$PERLLIB"
The commands I ran for this were
$ echo -e "\n\n## Allow Perl to use the files in Perl4::CoreLibs" >> $HOME/.bashrc
$ echo -e "export PERLLIB=\"/usr/lib/libperl4_corelibs_perl:$PERLLIB\"" >> $HOME/.bashrc
$ source $HOME/.bashrc
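As a quick sanity check that Perl can now actually find the legacy library (nothing more than a throwaway one-liner), the following should run without a "Can't locate" error:
$ perl -e 'require "getopts.pl"; print "getopts.pl found\n";'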
(Thanks to @melpomene for noting that the current version is 0.004, not 0.003.)
After that, I went back to the base folder of the install and ran make clean, make config, make all, and make check.
That did get me farther in the make check, but not by much.
I'm wondering if using the old Perl (Perl4::CoreLibs) is safe and/or good programming practice. Would it be better to change the source code to use Perl5 stuff?
P.S. After all this, you probably want to go back and delete the folder where you untarred everything. In my case:
rm -rf /path/to/where/I/started/perl_added
The Result/Next Steps
A bunch of tests that passed and then
(cd hubscr; make check)
make[2]: Entering directory '/cygdrive/c/David/programs/sctk2.4.0/sctk/src/hubscr'
Testing hubscr.pl
./RunTests.pl
Running test 'test1-sastt', operation 'test', options '-G -f rttm -F rttm -a', directory 'test1-sastt.test'
Executing command
Unescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/{_recursive_/_recur_{ <-- HERE _sive_/_si_ve_}_}/ at ../../md-eval/md-eval.pl line 1099, <DATA> line 12.
Error: MDEVAL failed
Command: md-eval.pl -nafcs -c 0.25 -o -r sastt-case1.ref.rttm.filt -s sastt-case1.sys.rttm.filt -M sastt-case1.sys.rttm.filt.mdeval.spkrmap 1> sastt-case1.sys.rttm.filt.mdeval at ../hubscr.pl line 679.
Error: Execution failed at ./RunTests.pl line 30.
make[2]: *** [makefile:20: check] Error 255
make[2]: Leaving directory '/cygdrive/c/David/programs/sctk2.4.0/sctk/src/hubscr'
make[1]: *** [makefile:68: checkFast] Error 2
make[1]: Leaving directory '/cygdrive/c/David/programs/sctk2.4.0/sctk/src'
make: *** [makefile:52: check] Error 2
Maybe this will be helpful. I will post a separate question for this issue or, if the solution is quick, I will add the solution to this post.
A Better Way
(Actually, a couple of better ways. See my comment under the question for the kaldi solution.)
In talking with colleagues and friends, it seems that there isn't anything unsafe about the Perl4 stuff. I did find a better way to get them "installed", but I'll leave the notes in the question showing the "long way" with the tarball, PERLPATH, etc.
Check that you have CPAN
$ which cpan
If you see something starting with which: no cpan in (...), you most likely don't have it. Try installing perl. For me, on Cygwin, I used
$ apt-cyg install perl
(Install apt-cyg if necessary, cf. here for instructions.)
You probably won't have to install Perl. You will likely see something like /usr/bin/cpan as the output of which cpan. If so, you're good. Enter cpan at the command prompt.
$ cpan
If it's your first time, it will ask a bunch of questions about the configuration. I just pressed "Enter" to accept the default each time, and I finally got a prompt like this:
cpan shell -- CPAN exploration and modules installation (v2.18)
Enter 'h' for help.
cpan[1]>
There, I entered
cpan[1]> install Perl4::CoreLibs
The install will proceed. When it has finished, you will be able to type exit and press "Enter", which will take you back to the bash command prompt.
cpan[2]> exit
Lockfile removed.
$
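If you'd rather skip the interactive shell, the same install can usually be done in one go from bash:
$ cpan Perl4::CoreLibs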
At this point, make check will still choke, but the install will complete successfully. If you want make check to get all the way through, go to the "Getting past make check" section below. Either way, you can now do the last two steps in the process.
$ make install
At this point I added the install path to my PATH variable. Hopefully, I'll be able to put in a link about that process. Here is a one-time solution.
$ export PATH=/path/to/sctk/bin:$PATH
Here is a lasting solution.
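Mirroring what was done above for PERLLIB, that amounts to appending the export to ~/.bashrc (with /path/to/sctk/bin replaced by your actual install location):
$ echo 'export PATH="/path/to/sctk/bin:$PATH"' >> $HOME/.bashrc
$ source $HOME/.bashrc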
Now, for the last step in the installation process:
$ make doc
After the make doc, I made sure that the man pages were available. I looked on my machine until I found the place where other man files went. (Sorry, I don't have a systematic way of doing it; I just looked in a lot of places.) For me, on Cygwin, the directory was /usr/man/man1.
I went into the doc directory
cd doc
and copied all of the files into the directory I had found
cp -r ./* /usr/man/man1/
Note that there are now also html and htm files in the directory that provide documentation.
Getting past `make check`
So, you really want to see it go through without errors. You need to change the following file: src/hubscr/RunTests.pl
Originally it has the following beginning, which I show here using the head command.
$ head -n 15 src/hubscr/RunTests.pl
#!/usr/bin/perl -w
use strict;
my $operation = (defined($ARGV[0]) ? $ARGV[0] : "test");
sub runIt{
my ($op, $testId, $options, $glm, $hub, $lang, $ref, $systems) = @_;
my $baseDir = $testId.".base";
my $outDir = $testId.($op eq "setTests" ? ".base" : ".test");
print " Running test '$testId', operation '$op', options '$options',
directory '$outDir'\n";
system ("mkdir -p $outDir");
system ("rm -fr $outDir/test* $outDir/lvc*");
### Copy files
foreach my $file($glm, $ref, split(/\s+/,$systems)){
system("cp $file $outDir");
Change it so that, after the print command, you have new lines as follows. I again use the head command to show the beginning of the file
$ head -n 63 src/hubscr/RunTests.pl
#!/usr/bin/perl -w
use strict;
my $operation = (defined($ARGV[0]) ? $ARGV[0] : "test");
sub runIt{
my ($op, $testId, $options, $glm, $hub, $lang, $ref, $systems) = @_;
my $baseDir = $testId.".base";
my $outDir = $testId.($op eq "setTests" ? ".base" : ".test");
print " Running test '$testId', operation '$op', options '$options', directory '$outDir'\n";
####DWB, 2018-05-21 Getting `make check` to work####
if ( $testId eq "test1-sastt" &&
$operation eq "test" &&
$options eq "-G -f rttm -F rttm -a" &&
$outDir eq "test1-sastt.test" ) # <problem 1>
{
print "\n";
print "\n#### SKIPPING ####";
print "\nJust kidding. That breaks the make.";
print "\nIt said: \n\n";
print "\nUnescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/{_recursive_/_recur_{ <-- HERE _sive_/_si_ve_}_}/ at ../../md-eval/md-eval.pl line 1099, <DATA> line 12.";
print "\nrror: MDEVAL failed";
print "\nCommand: md-eval.pl -nafcs -c 0.25 -o -r sastt-case1.ref.rttm.filt -s sastt-case1.sys.rttm.filt -M sastt-case1.sys.rttm.filt.mdeval.spkrmap 1> sastt-case1.sys.rttm.filt.mdeval at ../hubscr.pl line 679.";
print "\nError: Execution failed at ./RunTests.pl line 30.\n\n";
print "\n"
print "\nThat's a perl legacy problem, see:"
print "\n[https://unix.stackexchange.com/a/375505/291375][1]"
print "\nI'm outta here.";
print "\n Sincerely, bballdave025";
print "\n";
print "\n";
return;
}#endof: if (<problem 1>)
if ( $testId eq "test2-sastt" &&
$operation eq "test" &&
$options eq "-G -f rttm -F rttm -a" &&
$outDir eq "test2-sastt.test" ) # <problem 2>
{
print "\n";
print "\n#### SKIPPING ####";
print "\nJust kidding. That breaks the make.";
print "\nIt said: \n\n";
print "\nError: Test test2-sastt has failed. Diff output is :";
print "\ndiff -i -x CVS -x .DS_Store -x log -x '*lur' -I '[cC]reation[ _]date' -I md-eval -r test2-sastt.test/sastt-case2.sys.rttm.filt.alignments/segmentgroup-116.html test2-sastt.base/sastt-case2.sys.rttm.filt.alignments/segmentgroup-116.html";
print "\n 45c45";
print "\n < jg.drawStringRect(\"SUB48\",0, 47, scale*656, \"left\");";
print "\n ---";
print "\n#### and a whole bunch of other draw stuff! ####";
print "\n1 at ./RunTests.pl line 61.\n\n";
print "\n"
print "\nThat looks like Java drawing code, and I don't"
print "\neven want to mess with it!"
print "\nI'm outta here.";
print "\n Sincerely, bballdave025";
print "\n";
print "\n";
return;
}#endof: if (<problem 2>)
system ("mkdir -p $outDir")
Now you should be able to get through. Try it:
make check

proper syntax for splitting large mp3 files into several

I can split one large mp3 file into several files based on silence using the mp3splt command/program below
mp3splt -f -t 4.0 -a -d split audio_file.mp3
and I get
split/audio_file_000m_00s_005m_00s.mp3
but how can I get
split/000m_00s_005m_00s_audio_file.mp3
or increment by one in the front
split/000_audio_file_000m_00s_005m_00s.mp3
split/001_audio_file_005m_00s_010m_00s.mp3
I looked at the syntax at http://wiki.librivox.org/index.php/How_To_Split_With_Mp3Splt but couldn't figure out what needs to change in my command.
I'm using Ubuntu 16.04 64-bit Linux.
You need to set the -o (output format) option.
Try something like:
mp3splt -o @N3_@f -f -t 4.0 -a -d split audio_file.mp3
Giving you:
001_audio_file.mp3,
002_audio_file.mp3,
003_audio_file.mp3…
The man page is a little messy, but it's all there.
I used
mp3splt -o @N3_@mm_@ss_@f -f -t 4.0 -a -d split audio_file.mp3
which gives me
/split/001_000m_00s_audio_file.mp3
/split/002_004m_00s_audio_file.mp3

Calculate makefile build time

Any creative ideas on how to use environment variables to calculate the full build time of a makefile? Can I store the time somewhere and compare it at the end?
I know the OP meant Windows OS, but for the record, on any Linux machine you can use time:
>time make -j
real 0m9.774s
user 0m31.562s
sys 0m1.092s
On Windows 7 you've got PowerShell. So we'll assume:
Your makefile Makefile is in directory C:\Dev\ProjDir
Your make tool is mingw32-make and is in your PATH
If PowerShell isn't your usual console then do:
C:\Dev\ProjDir>powershell
At the PowerShell prompt, run:
PS C:\Dev\ProjDir> Measure-Command { mingw32-make | Out-Default }
This will run your build and display the output, followed by
a timing report, e.g.
Days : 0
Hours : 0
Minutes : 0
Seconds : 0
Milliseconds : 244
Ticks : 2443841
TotalDays : 2.82851967592593E-06
TotalHours : 6.78844722222222E-05
TotalMinutes : 0.00407306833333333
TotalSeconds : 0.2443841
TotalMilliseconds : 244.3841
Continued for OP's follow-up:
I was wondering if it is possible to be implemented as part of the makefile,
so I can decide which part to measure
The idea of timing "part of a makefile" is not really well-defined. When you run make, it parses
the entire makefile before it makes anything and works out the entire sequence of
steps it must take to make the specified or default target(s). Then, it takes all those
steps.
Perhaps you want to measure the time taken to make a particular set of targets? You
can do that in the way already described. For example, suppose that your makefile
can make two libraries libfoo.lib and libbar.lib and you would like to time the
build of just libfoo.lib. To make libfoo.lib by itself, the make command you would
run is:
mingw32-make libfoo.lib
So to time this command, run:
PS C:\Dev\ProjDir> Measure-Command { mingw32-make libfoo.lib | Out-Default }
Or suppose that your makefile makes app.exe from source files foo.c and bar.c, and
you would like to time just the build of the object files foo.obj, bar.obj. The
make command you would run to build just those object files is:
mingw32-make foo.obj bar.obj
So you would time it with:
PS C:\Dev\ProjDir> Measure-Command { mingw32-make foo.obj bar.obj | Out-Default }
Perhaps you would like to be able to invoke powershell's Measure-Command inside your makefile
to time the building of particular targets?
For this, you need a command that invokes PowerShell to run some other command. That is:
powershell -c "some other command"
So in your makefile you can add a target for timing the build of any other targets:
.PHONY: time
time:
	powershell -c "Measure-Command { $(MAKE) $(targets) | Out-Default }"
You would use the time target like so:
C:\Dev\ProjDir>mingw32-make time targets=app.exe
or:
C:\Dev\ProjDir>mingw32-make time targets="foo.obj bar.obj"
And of course, in your makefile, the commands to build a particular target can
include powershell -c "some other command" wherever you like.
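For completeness, if a POSIX-ish shell is available (e.g. under MSYS or Cygwin), the "store the time somewhere and compare it at the end" idea from the question can also be done without PowerShell; a rough sketch:
# Record the start time, build, then print the elapsed seconds
start=$(date +%s)
mingw32-make app.exe
echo "Build took $(( $(date +%s) - start )) seconds"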

beginner question on investigating samples in Weka

I've just used Weka to train my SVM classifier under the "Classify" tab.
Now I want to further investigate which data samples are misclassified; I need to study their pattern, but I don't know where to look for this in Weka.
Could anyone give me some help please?
Thanks in advance.
You can enable the option from the "Classify" tab: click "More options..." and check "Output predictions".
You will get the following instance predictions:
=== Predictions on test split ===
inst# actual predicted error prediction
1 2:Iris-ver 2:Iris-ver 0.667
...
16 3:Iris-vir 2:Iris-ver + 0.667
EDIT
As I explained in the comments, you can use the StratifiedRemoveFolds filter to manually split the data and create the 10-folds of the cross-validation.
This Primer from the Weka wiki has some examples of how to invoke Weka from the command line. Here's a sample bash script:
#!/bin/bash
# I assume weka.jar is on the CLASSPATH
# 10-folds CV
for f in $(seq 1 10); do
echo -n "."
# create train/test set for fold=f
java weka.filters.supervised.instance.StratifiedRemoveFolds -i iris.arff \
-o iris-f$f-train.arff -c last -N 10 -F $f -V
java weka.filters.supervised.instance.StratifiedRemoveFolds -i iris.arff \
-o iris-f$f-test.arff -c last -N 10 -F $f
# classify using SVM and store predictions of test set
java weka.classifiers.functions.SMO -C 1.0 \
-K "weka.classifiers.functions.supportVector.RBFKernel -G 0.01" \
-t iris-f$f-train.arff -T iris-f$f-test.arff \
-p 0 > f$f-pred.txt
#-i > f$f-perf.txt
done
echo
For each fold, this will create two datasets (train/test) and store the predictions in a text file as well. That way you can match each index with the actual instance in the test set.
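Since the misclassified instances are the ones flagged with a '+' in the error column (as in the sample output above), pulling them out of each fold's prediction file is then just a grep, for example:
grep ' + ' f*-pred.txt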
Of course, the same can be done in the GUI if you prefer (it's only a bit more tedious!).