Identifying a Programming Language - C++

So I have a software program that, for reasons beyond the scope of this post, I won't name, but to put it simply, I'd like to "mod" the original software. The program is launched from a Windows application named ViaNet.exe with accompanying DLL files such as ViaNetDll.dll. The application is given an argument such as ./Statup.cat. There is also a WatchDog process that uses the argument ./App.cat instead.
I was able to locate a log file for the ViaNet.exe application buried in my Windows/Temp folder. Looking at the log, it identifies files such as:
./Utility/base32.atc:_Encode32 line 67
./Utilities.atc:MemFun_:Invoke line 347
./Utilities.atc:_ForEachProperty line 380
./Cluster/ClusterManager.atc:ClusterManager:GetClusterUpdates line 1286
./Cluster/ClusterManager.atc:ClusterManager:StopSync line 505
./Cluster/ClusterManager.atc:ConfigSynchronizer:Update line 1824
Going to those file locations reveals files by those names, but ending with .cat instead of .atc. The log also indicates some sort of class, method and line number, but the .cat files are in binary form.
Searching the program folder for any files with the extension .atc reveals three files -- what I can assume are uncompiled .cat files. Lo and behold, once opened, it's obviously some sort of source code -- complete with copyright headers, lol.
global ConfigFolder, WriteConfigFile, App, ReadConfigFile, CreateAssocArray;
local mgrs = null;
local email = CreateAssocArray( null );
local publicConfig = ReadConfigFile( App.configPath + "\\publicConfig.dat" );
if ( publicConfig != null )
{
    mgrs = publicConfig.cluster.shared.clusterGroup[1].managers[1];
    local emailInfo = publicConfig.cluster.shared.emailServer;
    if (emailInfo != null)
    {
        if (emailInfo.serverName != "")
        {
            email.serverName = emailInfo.serverName;
        }
        if (emailInfo.serverEmailAddress != "")
        {
            email.serverEmailAddress = emailInfo.serverEmailAddress;
        }
        if (emailInfo.adminEmailAddress != null)
        {
            email.adminEmailAddress = emailInfo.adminEmailAddress;
        }
    }
}
if (mgrs != null)
{
    WriteConfigFile( ConfigFolder + "ZoneInfo.dat", mgrs );
}
WriteConfigFile( ConfigFolder + "EmailInfo.dat", email );
So to end this as simply as possible, I'm trying to find out two things: #1 What programming language is this? And #2 Can the .cat files be decompiled back to .atc files, and vice versa? Looking at the log, it would appear that the application is decoding/decompiling the .cat files in order to interpret them, versus running them as bytecode/natively. Searching for .atc on Google turns up AutoCAD, but looking at the results, those are some sort of palette files, nothing source code related.
It would seem to me that if I can program in this unknown language, let alone decompile the existing stuff, I might get lucky with modding the software. Thanks in advance for any help, and I really, really hope someone has an answer for me.
EDIT
So huge news, people: I've made quite an interesting discovery. I downloaded a patch from the vendor, and it contained a batch file that was executing ViaNet.exe Execute [Patch Script].atc. I quickly discovered that you can use Execute to run both .atc and .cat files equally, the same as with no argument. Knowing this, I assumed there must be various other arguments to try, and after a random stroke of luck, I found one: Compile [Script].atc. This argument will compile any .atc file to .cat. I've compiled the above script for comparison: http://pastebin.com/rg2YM8Q9
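For reference, the two invocations look like this (the script name here is just a placeholder):
ViaNet.exe Execute MyScript.atc
ViaNet.exe Compile MyScript.atc
Execute runs either an .atc or a .cat file; Compile produces the compiled .cat form of the given .atc.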
So I guess the goal now is to determine if it's possible to decompile said script. I took it a step further and was successful at obtaining C++ pseudocode from the ViaNet.exe and ViaNetDll.dll binaries, which has shed tons of light on the proprietary language and the API they use. From what I can tell, each script is decompiled first and then run through the interpreter. They have also nicknamed their language ATCL; still no idea what it stands for. While searching the API, I found several debug methods with names like ExecuteFile, ExecuteString, CompileFile, CompileString, InspectFunction and finally DumpObjCode. With the DumpObjCode method I'm able to perform some sort of dump of script files. Dump file for the above script: http://pastebin.com/PuCCVMPf
I hope someone can help me find a pattern with the progress I've made. I'm trying my best to go over the pseudocode, but I don't know C++, so I'm having a really hard time understanding it. I've tried to separate what I can identify as the compile-script subroutines, but I'm not certain: http://pastebin.com/pwfFCDQa
If someone can give me an idea of what this code snippet is doing, and whether it looks like I'm on the right path, I'd appreciate it. Thank you in advance.

Related

weka: how to generate libsvm training parameter

I am running libsvm through Weka. Its output accuracy looks good to me, so I am planning to write an SVM model by myself. However, Weka didn't generate any training parameters, such as the number of support vectors, so I cannot do anything with it. Searching the web, I found somebody saying it would generate some parameters like the following:
optimization finished, #iter = 27
nu = 0.058475864943863545
obj = -1.871013102744184, rho = -0.19357337828800944
nSV = 9, nBSV = 0
Total nSV = 9
But how come I didn't see any of them? Is there any step that I missed? Please help me. Thanks a lot.
Weka writes the output you mentioned to stderr.
So if you have started weka.sh or weka.bat from a terminal (or "command window" if you are on Windows), you should see that output appear in your terminal window after clicking "Classify".
If you want to have access to this information via scripts, you can
redirect the output to a file and read in that file.
Here is how to edit the startup file weka.sh / weka.bat.
Edit this line (it is probably the last line) in order to write log info to a file instead of the terminal window:
java -cp $CP -Xmx8092m weka.gui.GUIChooser 2>>/opt/weka-stable/weka.log &
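If you go that route, here is a minimal sketch for pulling the numbers back out of the log (the log path is the one from the startup line above; the pattern is just a guess based on the libsvm output quoted in the question):
import re

# scan the redirected Weka log for libsvm's "nSV = ..." lines
with open('/opt/weka-stable/weka.log') as log:
    for line in log:
        match = re.search(r'\bnSV\s*=\s*(\d+)', line)
        if match:
            print('number of support vectors: ' + match.group(1))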
You can also add a properties file to your home directory to add more fine-grained behaviour.
https://weka.wikispaces.com/Properties+file
(You probably can also access information via the Weka Java API somehow, but you did not ask for that)

How to properly debug a binary generated by `go test -c` using GDB?

The go test command has support for the -c flag, described as follows:
-c Compile the test binary to pkg.test but do not run it.
(Where pkg is the last element of the package's import path.)
As far as I understand, generating a binary like this is the way to run it interactively using GDB. However, since the test binary is created by combining the source and test files temporarily in some /tmp/ directory, this is what happens when I run list in gdb:
Loading Go Runtime support.
(gdb) list
42 github.com/<username>/<project>/_test/_testmain.go: No such file or directory.
This means I cannot happily inspect the Go source code in GDB like I'm used to. I know it is possible to force the temporary directory to stay by passing the -work flag to the go test command, but even then it is a huge hassle, since the binary is not created in that directory, among other things. I was wondering if anyone has found a clean solution to this problem.
Go 1.5 has been released, and there is still no officially sanctioned Go debugger. I haven't had much success using GDB for effectively debugging Go programs or test binaries. However, I have had success using Delve, a non-official debugger that is still undergoing development: https://github.com/derekparker/delve
To run your test code in the debugger, simply install delve:
go get -u github.com/derekparker/delve/cmd/dlv
... and then start the tests in the debugger from within your workspace:
dlv test
From the debugger prompt, you can single-step, set breakpoints, etc.
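For example, a session might look something like this (the test function name here is hypothetical; break, continue, next and print are actual Delve commands):
dlv test
(dlv) break mypkg.TestSomething
(dlv) continue
(dlv) next
(dlv) print someVariable
(dlv) quit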
Give it a whirl!
Unfortunately, this appears to be a known issue that's not going to be fixed. See this discussion:
https://groups.google.com/forum/#!topic/golang-nuts/nIA09gp3eNU
I've seen two solutions to this problem.
1) create a .gdbinit file with a set substitute-path command to
redirect gdb to the actual location of the source. This file could be
generated by the go tool but you'd risk overwriting someone's custom
.gdbinit file and would tie the go tool to gdb which seems like a bad
idea.
2) Replace the source file paths in the executable (which are pointing
to /tmp/...) with the location they reside on disk. This is
straightforward if the real path is shorter than the /tmp/... path.
This would likely require additional support from the compiler /
linker to make this solution more generic.
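For what it's worth, option 1 can be done by hand today. A minimal .gdbinit sketch (both paths are hypothetical and need to match what your build actually used):
# map the temporary build directory back to the real source tree
set substitute-path /tmp/go-build123456/github.com/<username>/<project>/_test /home/user/go/src/github.com/<username>/<project>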
That discussion spawned an issue on the Go issue tracker (then on Google Code), where the decision ended up being:
https://code.google.com/p/go/issues/detail?id=2881
This is annoying, but it is the least of many annoying possibilities.
As a rule, the go tool should not be scribbling in the source
directories, which might not even be writable, and it shouldn't be
leaving files elsewhere after it exits. There is next to nothing
interesting in _testmain.go. People testing with gdb can break on
testing.Main instead.
Russ
Status: Unfortunate
So, in short, it sucks, and while you can work around it and GDB a test executable, the development team is unlikely to make it as easy as it could be for you.
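If you do want to try Russ's suggestion of breaking on testing.Main, it amounts to something like this (pkg.test being whatever binary name go test -c produced for your package):
go test -c
gdb pkg.test
(gdb) break testing.Main
(gdb) run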
I'm still new to the golang game, but for what it's worth, basic debugging seems to work.
The list command you're trying to use works so long as you're already at a breakpoint somewhere in your code. For example:
(gdb) b aws.go:54
Breakpoint 1 at 0x61841: file /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go, line 54.
(gdb) r
Starting program: /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.test
[snip: some warnings about BinaryCache]
Breakpoint 1, github.com/stellar/deliverator/aws.imageIsNewer (latest=0xc2081fe2d0, ami=0xc2081fe3c0, ~r2=false)
at /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go:54
54 layout := "2006-01-02T15:04:05.000Z"
(gdb) list
49 func imageIsNewer(latest *ec2.Image, ami *ec2.Image) bool {
50 if latest == nil {
51 return true
52 }
53
54 layout := "2006-01-02T15:04:05.000Z"
55
56 amiCreationTime, amiErr := time.Parse(layout, *ami.CreationDate)
57 if amiErr != nil {
58 panic(amiErr)
This is just after running the following in the aws subdir of my project:
go test -c
gdb aws.test
As an additional caveat, it does seem very selective about where breakpoints can be placed. It seems like the breakpoint has to be at an expression, but that conclusion comes only from experimentation.
If you're willing to use tools besides GDB, check out godebug. To use it, first install with:
go get github.com/mailgun/godebug
Next, insert a breakpoint somewhere by adding the following statement to your code:
_ = "breakpoint"
Now run your tests with the godebug test command.
godebug test
It supports many of the parameters from the go test command.
-test.bench string
regular expression per path component to select benchmarks to run
-test.benchmem
print memory allocations for benchmarks
-test.benchtime duration
approximate run time for each benchmark (default 1s)
-test.blockprofile string
write a goroutine blocking profile to the named file after execution
-test.blockprofilerate int
if >= 0, calls runtime.SetBlockProfileRate() (default 1)
-test.count n
run tests and benchmarks n times (default 1)
-test.coverprofile string
write a coverage profile to the named file after execution
-test.cpu string
comma-separated list of number of CPUs to use for each test
-test.cpuprofile string
write a cpu profile to the named file during execution
-test.memprofile string
write a memory profile to the named file after execution
-test.memprofilerate int
if >=0, sets runtime.MemProfileRate
-test.outputdir string
directory in which to write profiles
-test.parallel int
maximum test parallelism (default 4)
-test.run string
regular expression to select tests and examples to run
-test.short
run smaller test suite to save time
-test.timeout duration
if positive, sets an aggregate time limit for all tests
-test.trace string
write an execution trace to the named file after execution
-test.v
verbose: print additional output

Why are my files smaller after I FTP them using this Python program?

I'm trying to send some files (a zip and a Word doc) to a directory on a server using ftplib. I have the broad strokes sorted out:
import ftplib

session = ftplib.FTP('ftp.server', 'user', 'pass')
filewpt = open(file, mode)
readfile = open(file, mode)
session.cwd('new/work/directory')
session.storbinary('STOR filename.zip', filewpt)
session.storbinary('STOR readme.doc', readfile)
print "filename.zip and readme.doc were sent to the folder on ftp"
readfile.close()
filewpt.close()
session.quit()
This may provide someone else what they're after, but not me. I have been using FileZilla as a check to make sure the files were transferred. When I see they have made it to the server, I see that they are both way smaller -- even zero K in the case of the readme.doc file. Now I'm guessing this has something to do with the fact that I stored the files in 'binary transfer mode' <--- whatever that means.
This is where my problems lie. I have no idea at all (yet) what is meant by binary transfer mode. Is it simply that I have to use retrbinary to return the files to their original state?
Could someone please explain to me like I'm a two year old what has happened to my files? If there's any more info required, please let me know.
This is a fantastic resource and solved most of my problems. I'm still trying to work out the intricacies of FTP, but I guess I will save that for another day. The link below builds a function to effortlessly upload files to an FTP server without the partial-upload problem I've seen experienced by more than one Stack Exchanger.
http://effbot.org/librarybook/ftplib.htm
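For what it's worth, the usual culprit behind shrinking uploads is opening the local files in text mode: storbinary sends exactly the bytes Python reads, and a text-mode read on Windows translates line endings and stops at the first Ctrl-Z (0x1A) byte, so zips and .doc files arrive truncated. A minimal sketch that avoids this (server and file names are placeholders):
import ftplib

session = ftplib.FTP('ftp.example.com', 'user', 'pass')
session.cwd('new/work/directory')
# 'rb' = read binary: the bytes go over the wire untouched
with open('filename.zip', 'rb') as f:
    session.storbinary('STOR filename.zip', f)
with open('readme.doc', 'rb') as f:
    session.storbinary('STOR readme.doc', f)
session.quit()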

Strange semantic error

I have reinstalled emacs 24.2.50 on a new linux host and started a new dotEmacs config based on magnars' emacs configuration. Since I have used CEDET with some success in my previous workflow, I started configuring it. However, there is some strange behaviour whenever I load a C++ source file.
[This Part Is Solved]
As expected, semantic parses all included files (and during the initial setup parses all files specified by the semantic-add-system-include variables), but it prints an error message that goes like this:
WARNING: semantic-find-file-noselect called for /usr/include/c++/4.7/vector while in set-auto-mode for /usr/include/c++/4.7/vector. You should call the responsible function into 'mode-local-init-hook'.
In the above example the error is printed for the STL vector, but a corresponding error message is printed for every file included by the one I'm visiting, and for any subsequent includes. As a result it takes quite a long time to finish, and unfortunately the process is repeated any time I open a new buffer.
[This Problem Is Solved Too]
Furthermore, it looks like the parsing doesn't really work: when I place the point on a type that is not a C primitive (i.e. not int, double, float, etc.), instead of printing the type's definition in the modeline, it prints an error message like
Idle Service Error semantic-idle-local-symbol-highlight-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, (((0) \"IndexMap\"))"
Idle Service Error semantic-idle-summary-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, ((\"fXBetween\" 0 nil nil))"
where DEPFETResolutionAnalysis.cc is the file & buffer I'm currently editing and IndexMap and fXBetween are types defined in files included by the file I'm editing/some file included by the file I'm editing.
I have not tested any further features of CEDET/semantic as the problem is pretty annoying. My cedet config can be found here.
EDIT: With the help of Alex Ott I kinda solved the first problem. It was due to my horrible cedet initialisation. See his first answer for the proper way to configure CEDET!
There still remains the problem with the Idle Service Error (which, when enabling global-semantic-idle-local-symbol-highlight-mode, occurs permanently, not only when checking the definition of the type at point).
And there is the new problem of how to disable the site-wise init file(s).
EDIT2: I have executed semantic-debug-idle-function in a buffer where the problem occurs and it produces a ~700kb [sic!] output. It looks like it is performing some operations on a data container which, by the looks of it, contains information on all the symbols defined in the parsed files. As I have parsed a rather large package (~20Mb of source files), this table is rather large. Can semantic handle a database that large, or is this impossible and the reason for my problem?
EDIT3: Deleting the content of ~/.semanticdb and reparsing all includes did the trick. I still need to disable the site-wise init files but as this is not related to CEDET I will close this question (the question related to the site-wise init files can be found here).
You need to change your init file so that it loads CEDET only once, not in a hook that is called for every .h/.hpp/.c/.cpp file. You can use this config as a base, and read more in the following article.
The problem that you have is caused because Semantic is trying to analyze header files, and when it tries to open them, then its initialization routines are called again, and again...
The first problem was solved by correctly configuring CEDET, which is described on Alex Ott's homepage. His answer solves this first problem. The config file specified in his answer is a great start for a nice config; I have used the very same to configure CEDET for my needs.
The second problem vanished once I updated CEDET from 1.1 to the bazaar (repository) version, which is explained here and in Alex's article. Additionally, one must delete the content of the directory ~/.semanticdb (which contains the semantic database and was, I guess, corrupted).
I'd like to thank Alex Ott for his help and sticking with me throughout my journey to the solution :)

Eclipse CDT Debugger Issue, v. .metadata does not exist

I am attempting to use the gdb/mi debugger in Eclipse CDT version 6.02. While debugging I can step through the program with ease, up until I reach the following snippet of code.
ENUM_START_TYPE My_Class::some_function( const char * c, const char * e )
{
    ENUM_START_TYPE result = GENERIC_ENUM_VALUE;
    if ( c[0] == '<' )
    {
        result = do_something();
    }
    ...
    MORE CODE
    ...
    return result;
}
When the debugger reaches this line:
if ( c[0] == '<' )
It starts exploring sections of code that it cannot find, until it opens a tab for /projectname/.metadata and simply declares:
"Resource '/project_name/.metadata' does not exist."
At which point the debugger terminates the program with no indication as to why.
All I wish to do is step over this line of code because it really is as trivial as comparing characters.
My question is: Why is this happening? Is it something to do with the debugger, or does it have something to do with my code, or what? Additionally, what is .metadata, and why can't the file be located and opened when it clearly exists (I can locate and open the .metadata file without a problem)?
Other info that may be pertinent: The files are located on a clearcase snapshot view, but are not checked into source control. I don't think that would cause an error like this, but clear case has caused so many random errors for me that I thought it would be worth mentioning.
Thanks in advance
I am not aware of any side-effect a snapshot view might have in this process.
A dynamic view could consider part of the directories as "non-selected" (and therefore non-readable).
There is also the issue of symlinks to a dynamic view set on a drive.
But a snapshot view is nothing more than a working tree on the hard drive.
To rule out any "ClearCase interference", you could try and debug your project entirely copied outside of any view of any sort (based on the content of your current snapshot view), and see if the problem persists.