Weka: how to generate libsvm training parameters

I am running libsvm through Weka. Its output accuracy looks good to me, so I am planning to write an SVM model myself. However, Weka did not output any training parameters, such as the number of support vectors, so I cannot proceed. Searching the web, I found someone saying it should print parameters like the following:
optimization finished, #iter = 27
nu = 0.058475864943863545
obj = -1.871013102744184, rho = -0.19357337828800944
nSV = 9, nBSV = 0
Total nSV = 9
But why don't I see any of them? Is there a step I missed? Please help me. Thanks a lot.

Weka writes the output you mentioned to stderr.
So if you have started weka.sh or weka.bat from a terminal (or "command window" if you are on Windows), you should see that output appear in your terminal window after clicking "Classify".
If you want to access this information from scripts, you can redirect the output to a file and read that file in (see the sketch below).
Here is how to edit the startup file weka.sh / weka.bat.
Edit this line (it is probably the last line) in order to write log info to a file instead of the terminal window:
java -cp $CP -Xmx8092m weka.gui.GUIChooser 2>>/opt/weka-stable/weka.log &
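Once the log file exists, any small program or script can pull the libsvm statistics back out of it. Here is a minimal sketch (C++ for concreteness; the log path is the one used above, and the matched keywords simply follow the sample output in the question):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Scan the redirected Weka log for libsvm training statistics.
    std::ifstream log("/opt/weka-stable/weka.log");
    std::string line;
    while (std::getline(log, line)) {
        // Keep lines that mention the libsvm parameters, e.g. "Total nSV = 9".
        if (line.find("nSV") != std::string::npos ||
            line.find("rho") != std::string::npos ||
            line.find("#iter") != std::string::npos) {
            std::cout << line << "\n";
        }
    }
    return 0;
}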
You can also add a properties file to your home directory for more fine-grained behaviour.
https://weka.wikispaces.com/Properties+file
(You can probably also access this information via the Weka Java API somehow, but you did not ask for that.)

Related

Plot values in .m file from C++

I have looked extensively on the net, yet not found exactly what I want.
I have a big simulation program that outputs results in a MATLAB M-file (let's call it res.m) and I want to plot the results visually.
I want to start the simulation from C++ many times in a row and therefore want to automate the plotting of the results.
I have come up with two options:
Execute an Octave or MATLAB script from C++ that generates the graph.
-> I haven't found anyone who has managed to do so.
Use the Octave source files to read the res.m file and then plot the data with whatever C++ plotting tool.
-> Theoretically possible, but I get lost in those files.
Is someone able to solve this? Or does someone have a better, easier approach?
The answer is to execute Octave through the terminal.
I didn't manage to run an Octave script from my C++ program directly, but there is a workaround that goes through the terminal and an extra Octave file. In my .cpp I used:
std::string str = "octave myProgr.m";  // needs <string> and <cstdlib>
const char *command = str.c_str();
system(command);
myProgr.m is the Octave script that plots the res.m file.
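A slightly fuller sketch of the same idea, with a check on the exit status of the call (the file names are the ones from the question; this is just one way to wire it up):

#include <cstdlib>
#include <iostream>
#include <string>

int main() {
    // Run the Octave script that reads res.m and produces the plot.
    const std::string cmd = "octave myProgr.m";
    const int status = std::system(cmd.c_str());
    if (status != 0) {
        std::cerr << "Octave call failed, exit status " << status << "\n";
        return 1;
    }
    return 0;
}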

Identifying a Programming Language

So I have a software program that, for reasons beyond this post, I will not name; to put it simply, I'd like to "mod" the original software. The program is launched from a Windows application named ViaNet.exe with accompanying DLL files such as ViaNetDll.dll. The application is given an argument such as ./Statup.cat. There is also a WatchDog process that uses the argument ./App.cat instead of the former.
I was able to locate a log file for the ViaNet.exe application buried in my Windows/Temp folder. Looking at the log, it identifies files such as:
./Utility/base32.atc:_Encode32 line 67
./Utilities.atc:MemFun_:Invoke line 347
./Utilities.atc:_ForEachProperty line 380
./Cluster/ClusterManager.atc:ClusterManager:GetClusterUpdates line 1286
./Cluster/ClusterManager.atc:ClusterManager:StopSync line 505
./Cluster/ClusterManager.atc:ConfigSynchronizer:Update line 1824
Going to those file locations reveals files by those names, but ending with .cat instead of .atc. The log also indicates some sort of class, method and line number, but the .cat files are in binary form.
Searching the program folder for any files with the extension .atc reveals three files -- what I can assume are uncompiled .cat files. Lo and behold, once opened, they are obviously some sort of source code -- with copyright headers, lol.
global ConfigFolder, WriteConfigFile, App, ReadConfigFile, CreateAssocArray;
local mgrs = null;
local email = CreateAssocArray( null);
local publicConfig = ReadConfigFile( App.configPath + "\\publicConfig.dat" );
if ( publicConfig != null )
{
    mgrs = publicConfig.cluster.shared.clusterGroup[1].managers[1];
    local emailInfo = publicConfig.cluster.shared.emailServer;
    if (emailInfo != null)
    {
        if (emailInfo.serverName != "")
        {
            email.serverName = emailInfo.serverName;
        }
        if (emailInfo.serverEmailAddress != "")
        {
            email.serverEmailAddress = emailInfo.serverEmailAddress;
        }
        if (emailInfo.adminEmailAddress != null)
        {
            email.adminEmailAddress = emailInfo.adminEmailAddress;
        }
    }
}
if (mgrs != null)
{
    WriteConfigFile( ConfigFolder + "ZoneInfo.dat", mgrs);
}
WriteConfigFile( ConfigFolder + "EmailInfo.dat", email);
So, to end this as simply as possible, I'm trying to find out two things: #1, what programming language is this? And #2, can the .cat files be decompiled back to .atc files -- and vice versa? Looking at the log, it would appear that the application is already decoding/decompiling the .cat files in order to interpret them, versus running them as bytecode/natively. Searching for .atc on Google turns up AutoCAD, but looking at the results, those are some sort of palette files, nothing source-code related.
It would seem to me that if I can program in this unknown language, let alone decompile the existing stuff, I might get lucky with modding the software. Thanks in advance for any help, and I really, really hope someone has an answer for me.
EDIT
Huge news, people: I've made quite an interesting discovery. I downloaded a patch from the vendor; it contained a batch file that executed ViaNet.exe Execute [Patch Script].atc. I quickly discovered that you can use Execute to run both .atc and .cat files equally, same as with no argument. Knowing this, I assumed there must be other arguments to try, and after a random stroke of luck I found one: Compile [Script].atc. This argument compiles any .atc file to .cat. I've compiled the above script for comparison: http://pastebin.com/rg2YM8Q9
So I guess the goal now is to determine whether it's possible to decompile said script. I took a step further and succeeded in obtaining C++ pseudocode from the ViaNet.exe and ViaNetDll.dll binaries, which has shed tons of light on the proprietary language and the API they use. From what I can tell, each execution is decompiled first and then run through the interpreter. They have also nicknamed their language ATCL; still no idea what it stands for. While searching the API, I found several debug methods with names like ExecuteFile, ExecuteString, CompileFile, CompileString, InspectFunction and finally DumpObjCode. With the DumpObjCode method I'm able to perform some sort of dump of script files. Dump file for the above script: http://pastebin.com/PuCCVMPf
I hope someone can help me find a pattern with the progress I've made. I'm trying my best to go over the pseudocode, but I don't know C++, so I'm having a really hard time understanding the code. I've tried to separate what I can identify as the compile-script subroutines, but I'm not certain: http://pastebin.com/pwfFCDQa
If someone can give me an idea of what this code snippet is doing and whether it looks like I'm on the right path, I'd appreciate it. Thank you in advance.

How to properly debug a binary generated by `go test -c` using GDB?

The go test command has support for the -c flag, described as follows:
-c Compile the test binary to pkg.test but do not run it.
(Where pkg is the last element of the package's import path.)
As far as I understand, generating a binary like this is the way to run it interactively using GDB. However, since the test binary is created by combining the source and test files temporarily in some /tmp/ directory, this is what happens when I run list in gdb:
Loading Go Runtime support.
(gdb) list
42 github.com/<username>/<project>/_test/_testmain.go: No such file or directory.
This means I cannot happily inspect the Go source code in GDB like I'm used to. I know it is possible to force the temporary directory to stay by passing the -work flag to the go test command, but then it is still a huge hassle since the binary is not created in that directory and such. I was wondering if anyone found a clean solution to this problem.
Go 1.5 has been released, and there is still no officially sanctioned Go debugger. I haven't had much success using GDB for effectively debugging Go programs or test binaries. However, I have had success using Delve, a non-official debugger that is still undergoing development: https://github.com/derekparker/delve
To run your test code in the debugger, simply install delve:
go get -u github.com/derekparker/delve/cmd/dlv
... and then start the tests in the debugger from within your workspace:
dlv test
From the debugger prompt, you can single-step, set breakpoints, etc.
Give it a whirl!
Unfortunately, this appears to be a known issue that's not going to be fixed. See this discussion:
https://groups.google.com/forum/#!topic/golang-nuts/nIA09gp3eNU
I've seen two solutions to this problem.
1) create a .gdbinit file with a set substitute-path command to
redirect gdb to the actual location of the source. This file could be
generated by the go tool but you'd risk overwriting someone's custom
.gdbinit file and would tie the go tool to gdb which seems like a bad
idea.
2) Replace the source file paths in the executable (which are pointing
to /tmp/...) with the location they reside on disk. This is
straightforward if the real path is shorter than the /tmp/... path.
This would likely require additional support from the compiler /
linker to make this solution more generic.
This spawned an issue on the Go issue tracker (hosted on Google Code at the time), and the decision ended up being:
https://code.google.com/p/go/issues/detail?id=2881
This is annoying, but it is the least of many annoying possibilities.
As a rule, the go tool should not be scribbling in the source
directories, which might not even be writable, and it shouldn't be
leaving files elsewhere after it exits. There is next to nothing
interesting in _testmain.go. People testing with gdb can break on
testing.Main instead.
Russ
Status: Unfortunate
So, in short, it sucks, and while you can work around it and GDB a test executable, the development team is unlikely to make it as easy as it could be for you.
I'm still new to the Go game, but for what it's worth, basic debugging seems to work.
The list command you're trying to use works as long as you're already at a breakpoint somewhere in your code. For example:
(gdb) b aws.go:54
Breakpoint 1 at 0x61841: file /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go, line 54.
(gdb) r
Starting program: /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.test
[snip: some warnings about BinaryCache]
Breakpoint 1, github.com/stellar/deliverator/aws.imageIsNewer (latest=0xc2081fe2d0, ami=0xc2081fe3c0, ~r2=false)
at /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go:54
54 layout := "2006-01-02T15:04:05.000Z"
(gdb) list
49 func imageIsNewer(latest *ec2.Image, ami *ec2.Image) bool {
50 if latest == nil {
51 return true
52 }
53
54 layout := "2006-01-02T15:04:05.000Z"
55
56 amiCreationTime, amiErr := time.Parse(layout, *ami.CreationDate)
57 if amiErr != nil {
58 panic(amiErr)
This is just after running the following in the aws subdir of my project:
go test -c
gdb aws.test
As an additional caveat, it does seem very selective about where breakpoints can be placed. It seems like it has to be on an expression, but that conclusion is based only on experimentation.
If you're willing to use tools besides GDB, check out godebug. To use it, first install with:
go get github.com/mailgun/godebug
Next, insert a breakpoint somewhere by adding the following statement to your code:
_ = "breakpoint"
Now run your tests with the godebug test command.
godebug test
It supports many of the parameters from the go test command.
-test.bench string
    regular expression per path component to select benchmarks to run
-test.benchmem
    print memory allocations for benchmarks
-test.benchtime duration
    approximate run time for each benchmark (default 1s)
-test.blockprofile string
    write a goroutine blocking profile to the named file after execution
-test.blockprofilerate int
    if >= 0, calls runtime.SetBlockProfileRate() (default 1)
-test.count n
    run tests and benchmarks n times (default 1)
-test.coverprofile string
    write a coverage profile to the named file after execution
-test.cpu string
    comma-separated list of number of CPUs to use for each test
-test.cpuprofile string
    write a cpu profile to the named file during execution
-test.memprofile string
    write a memory profile to the named file after execution
-test.memprofilerate int
    if >=0, sets runtime.MemProfileRate
-test.outputdir string
    directory in which to write profiles
-test.parallel int
    maximum test parallelism (default 4)
-test.run string
    regular expression to select tests and examples to run
-test.short
    run smaller test suite to save time
-test.timeout duration
    if positive, sets an aggregate time limit for all tests
-test.trace string
    write an execution trace to the named file after execution
-test.v
    verbose: print additional output

Wmic /format switch invalid XSL?

I have a quick question; it should be relatively simple for those who have more experience with the WMI command processor than I do (and since I'm an absolute beginner, that's not hard :-) ).
I fail to understand why the wmic /format switch works the way it does. I open up cmd.exe and type
wmic process list brief /format:htable > processlist.html
This does exactly what I want, with no bother further on. Whereas if I go into the wmic processor and try to execute the same command exactly as above...
wmic:root\cli>process list brief /format:htable > processlist.html
I receive the error: "Invalid XSL format (or) file name."
Note that I have already copied the XSL files from the wbem folder to the system32 directory.
Can someone explain to me why these two commands, which to me look exactly the same except that one is executed outside the wmic environment and the other from inside it, behave differently, with the latter failing? I just fail to understand it.
Please advise so I can comprehend this a bit better! :-)
Try this
copy /y %WINDIR%\system32\wbem\en-US\*.xsl %WINDIR%\system32\
And then
wmic:root\cli>process list brief /format:htable.xsl > processlist.html
Note the presence of the .xsl extension after "htable".
You are attempting to use CMD.EXE > redirection while you are within the interactive WMIC context. That can't work.
You can use the WMIC /output:filename switch while in interactive mode. Each subsequent command will overwrite the output of the previous command. You can get multiple commands to go to the same file by using /append:filename instead. You can reset the output back to stdout using /output:stdout.
/output:processlist.html
process list brief /format:htable
/output:stdout
Did you try specifying a full path in the wmic:root\cli>process call? My bet is that the first one worked because it wrote the file to the current directory.

Is there a TeX API for C++?

I want to preview TeX formulas in my user interface. After a long time searching, it seems to me that there is no other possibility than to:
write the formula into a .tex file
call tex with system() and write a dvi file
call e.g. dvipng with system() and write a png file
load this file into the GUI
clean up (erase all these files).
I think that the performance of doing it this way is not a problem, since there are only formulas to render, not whole documents. But automatically setting up the environment for the TeX system seems to be a bigger problem.
So, is there a possibility to include TeX as an API in my program?
Thanks a lot!
Couldn't you encapsulate those steps in a single shell script (one which takes the formula and the PNG filename as arguments)? The script could then also handle setting up the environment for TeX. Your program just calls the script with the system() call.
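Whether wrapped in a script or called directly, the pipeline itself is only a few commands. Here is a minimal C++ sketch of the steps from the question (the plain-TeX wrapper, the batchmode flag and the dvipng options are just assumptions about one workable way to call the tools):

#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <string>

// Render a single formula to formula.png via tex -> dvi -> png.
// Returns true on success.
bool renderFormula(const std::string &formula) {
    std::ofstream tex("formula.tex");
    tex << "\\nopagenumbers $$ " << formula << " $$ \\bye\n";  // bare plain-TeX wrapper
    tex.close();

    if (std::system("tex -interaction=batchmode formula.tex") != 0) return false;
    if (std::system("dvipng -T tight -o formula.png formula.dvi") != 0) return false;

    // Clean up the intermediate files; the GUI then loads formula.png.
    std::remove("formula.tex");
    std::remove("formula.dvi");
    std::remove("formula.log");
    return true;
}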
There's a C API for TeX called MimeTeX, but the resulting image is... not as nice as it could be.
If you're OK with Java, there's JLatexMath
And if you'd like a WPF version, one is under development at WPFMath
I'm not sure, but I think MathType's component would be overkill.
Also have a look at SlideShare and the Flash videos there to get more information about sitmo, MathMagic, Edoboard and their API tools.
Good luck.
For Edoboard and Tutorsbox.com we do the following:
Keep a blacklist of LaTeX commands to avoid:
TEX_BLACKLIST = ["\\def", "\\let", "\\futurelet",
"\\newcommand", "\\renewcommand", "\\else", "\\fi", "\\write",
"\\input", "\\include", "\\chardef", "\\catcode", "\\makeatletter",
"\\noexpand", "\\toksdef", "\\every", "\\errhelp", "\\errorstopmode",
"\\scrollmode", "\\nonstopmode", "\\batchmode", "\\read", "\\csname",
"\\newhelp", "\\relax", "\\afterground", "\\afterassignment",
"\\expandafter", "\\noexpand", "\\special", "\\command", "\\loop",
"\\repeat", "\\toks", "\\output", "\\line", "\\mathcode", "\\name",
"\\item", "\\section", "\\mbox", "\\DeclareRobustCommand", "\\[", "\\]"];
We then make a system call to latex followed by a TeX-to-PNG conversion.
Wrap that up as a REST API plus some caching, and here you go :)
As an upgrade, we will soon convert those LaTeX images to SVG.
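For what it's worth, the blacklist check itself is simple to sketch; here is a minimal C++ version applying the list above (the function name is made up, and a real filter would also want to normalise whitespace and catch aliases):

#include <string>
#include <vector>

// Hypothetical helper: returns true if the formula contains any blacklisted
// LaTeX command, in which case it is rejected before being handed to latex.
bool containsBlacklistedCommand(const std::string &formula,
                                const std::vector<std::string> &blacklist) {
    for (const std::string &cmd : blacklist) {
        if (formula.find(cmd) != std::string::npos) {
            return true;
        }
    }
    return false;
}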
LyX is a TeX-based document processor. As the application is open source, you can inspect the C++ code to see how they deal with the problem you described.