Currently I am calling an R script from C++ in the following way:
system("PATH C:\\Program Files\\R\\R-3.0.1\\bin\\x64");
system("RScript CommandTest.R");
Where CommandTest.R is my script.
This works, but is slow, since I need a particular package and this method makes the package load on every call.
Is there a way to load the package once and then keep that instance of Rscript open so that I can continue to make calls to it without having to reload the package every time?
PS: I know that the 'better' method is probably to go with Rcpp/RInside, and I will go down that route if necessary, but I thought it'd be worth asking if there's an easy way to do what I need without it.
It seems like the Rserve package is what you seek. Basically it keeps an R "server" open which can be asked to evaluate expressions.
It has client bindings for Java and C++, and supports communication between one R session and another.
In the documentation, you might want to look at run.Rserve and self.ctrlEval.
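For flavour, here is a minimal sketch using the Rconnection.h C++ client that ships in the Rserve sources. The class and method names follow its bundled demo, so check the copy in your distribution; somePackage and someFunction are placeholders, not real names. The point is that the package loads once, and every later eval() reuses the same warm session:

```cpp
// Start the server once, e.g. from R:  library(Rserve); Rserve()
// Then talk to it from C++ via Rserve's bundled client header.
#include <cstdio>
#include "Rconnection.h"

int main() {
    Rconnection rc;                    // defaults to localhost:6311
    if (rc.connect()) {                // non-zero return means failure
        std::fprintf(stderr, "cannot connect to Rserve\n");
        return 1;
    }
    rc.eval("library(somePackage)");   // pay the package-load cost once

    // Subsequent calls reuse the same warm R session, no reload:
    Rdouble *d = (Rdouble *) rc.eval("as.numeric(someFunction(42))");
    if (d) std::printf("result: %f\n", d->doubleAt(0));

    rc.disconnect();
    return 0;
}
```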
I'm not aware of a solution for keeping R permanently open, but you can speed up startup by calling R with the --vanilla option, e.g. Rscript --vanilla CommandTest.R. (See Appendix B.1 of An Introduction to R for more options.)
You could also try accessing the functions using :: to avoid completely loading the package. (Try profiling to see if that really saves you much time. Is the package load actually the slow part of your analysis?)
Situation:
I'm attempting to get coverage reports for a project that uses both C++ and Python. I'm using LCOV/GCOV for the C++, and attempting to use Coverage.py for the Python side. The only issue is that most of the Python code being used consists of utility functions called one at a time: no initialization, no real life cycle, no exit. So there is no obvious way to use the API to start/stop/save, or to use the coverage command line to measure.
Given that, I thought the easiest way to accomplish this would be the sitecustomize.py method as outlined here. I have gotten that to work, and it measures all configured Python code as expected. Now I'm looking at how to accomplish this with compiled Python code (.pyc).
I can get it to work if I keep the source (.py) and compiled (.pyc) files in the same directory when running and then reporting. However, I'm looking for a way to RUN the files and generate the measurement data, then at a later time point the tooling at the actual source files and run the actual reports. Ideally I wouldn't need the source (.py) files on the target at all, but I haven't found a way to accomplish this.
Objective:
In the end I want to be able to compile the Python files (.pyc), install them on the target, and run coverage as stated above. That will generate the coverage data files; I can then pull those files over to my host machine, which houses the source (.py) files, and do the actual coverage reporting there.
Is this possible currently?
[Edit] Thanks to Ned's advice, I looked into the [paths] usage, and it worked exactly how I needed it to.
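For anyone who lands here later, a minimal sketch of what that [paths] configuration can look like in .coveragerc (the directory names are made up): it tells coverage.py that files measured under the target's install path are the same files as the host's source tree.

```ini
# .coveragerc on the host machine (paths below are hypothetical)
[paths]
source =
    src/mypackage/        # where the .py sources live on the host
    /opt/app/mypackage/   # where the .pyc files ran on the target
```

After pulling the data files back to the host, running coverage combine applies the remapping, and coverage report then resolves everything against the real sources.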
After defining variables, functions, etc., can you save what you have done in the REPL to a .clj text file?
Most people work with the REPL through an editor such as Eclipse/Emacs/Vim, and that editor has the ability to save the REPL session, though without some diligence on the developer's part this will likely be an incomplete record of what happened. Some of the state of the REPL may have come from loading files, etc., which may since have changed.
So the short answer is typically not.
On Linux (mine = Ubuntu 16.04.2 LTS), if you are using lein, then check for .lein (a hidden directory) and look for repl-history. You should find the commands that you have typed or pasted into the REPL. This can be a source for later editing; I use Geany...
I am answering the parenthetical part of your question. For me, the Clojure REPL is very useful for testing functions and proving out concepts that take no more than a few lines. I will often put hooks in a namespace that is not the main one, just so I can load a file and run it through a couple of functions. I can also do this from main using the same mindset; that is, write a debug function.
I found the Eclipse plugin quite useful, but I do not use it much these days; mostly Vim, running the module with one or more special functions, and running the main. I don't know of any way to save REPL state.
I want to generate C++ classes from an IDL file using MICO in the context of CORBA. I downloaded mico-2.3.13.zip but I don't know how to use it. Can someone help me? Thanks.
The answer would probably be longer than would comfortably fit in a short reply, but here are some pages with helpful starter info.
This class webpage has a mini tutorial using MICO:
http://www.cs.wichita.edu/~chang/lecture/cs843/program/mico-idl.html
Here's another fairly simple tutorial page:
http://people.inf.ethz.ch/roemer/micodoc/node16.html
You first need to compile MICO from the sources. Depending on your operating system and environment this will require different steps. On Linux/Mac OS X it is basically a matter of running the ./configure script and then make, if configure did not fail. Under Windows I think you can call nmake directly (with some options; read the README files).
After compilation completes (this may take a few minutes) and if everything goes fine, you should have the executables and can use them to create your own CORBA interfaces and services.
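As a hedged sketch of where that leads: suppose you write a trivial Hello.idl and run MICO's IDL compiler over it (the compiler binary and the generated file names vary by version, so check your build). Client code in the standard C++ mapping then typically looks something like this; every name here (Hello, say_hello, hello.ior) is illustrative, not from the question.

```cpp
// Assume Hello.idl declared:  interface Hello { void say_hello(); };
// and the IDL compiler generated Hello.h / Hello.cc from it.
#include <CORBA.h>
#include "Hello.h"
#include <fstream>
#include <string>

int main(int argc, char *argv[]) {
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

    // Read the server's stringified object reference (IOR), e.g. one
    // the server wrote to a file at startup.
    std::ifstream in("hello.ior");
    std::string ior;
    in >> ior;

    CORBA::Object_var obj = orb->string_to_object(ior.c_str());
    Hello_var hello = Hello::_narrow(obj);   // downcast to the IDL interface
    hello->say_hello();                      // the actual remote call
    return 0;
}
```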
I'm looking for a good, efficient method for scanning a directory structure for changed files on Windows XP+. Something like how git does it is exactly what I'm looking for: running git status displays all modified files, all new (untracked) files, and all deleted files very quickly.
I have a basic model up and running which performs an initial scan and stores all filenames, size, dates and attributes.
On a subsequent scan it checks whether the size, attributes, or date have changed, and marks the file as changed.
My issue now comes in detecting moved and deleted files. Is there a tried and tested method for this sort of thing? I'm struggling to come up with a good method.
I should mention that it will eventually use ReadDirectoryChangesW to monitor files and alert the user when something changes so a full scan is really a last resort after the initial scan.
Thanks,
J
EDIT: I think I may have described the problem badly. The issue I'm facing is not so much detecting the changes (I have ReadDirectoryChangesW() using IOCP on multiple threads to detect when a change happens); the issue is more what to do with the information. For example, a moved file is reported as a delete followed by a create, and a rename comes in two parts: old name followed by new name. So what I'm asking is how to differentiate between a delete that is part of a move and an actual delete. I'm guessing buffering the changes and processing them in batches would be an option, but it feels messy.
In native code, FileSystemWatcher is replaced by ReadDirectoryChangesW. Using this properly is not simple; there is a good baseline to build from here.
I have used this code in a previous job and it worked pretty well. The Win32 API itself (and FileSystemWatcher) are prone to problems that are described in the docs and also discussed in various places online, but the impact of those will depend on your use case.
EDIT: the exact change is indicated in the FILE_NOTIFY_INFORMATION structure that you get back: adds, removals, and rename data including the old and new name.
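To address your edit directly: pairing the two halves of a rename usually falls out of the buffer layout, because FILE_ACTION_RENAMED_OLD_NAME and FILE_ACTION_RENAMED_NEW_NAME arrive as consecutive records. Here is a minimal sketch of walking the buffer (the IOCP plumbing and error handling are omitted). A cross-directory move, which arrives as REMOVED plus ADDED, still needs the batching you describe, e.g. holding a REMOVED record briefly to see whether a matching ADDED follows.

```cpp
#include <windows.h>
#include <iostream>
#include <string>

// Walk the FILE_NOTIFY_INFORMATION records returned by ReadDirectoryChangesW.
void ProcessNotifications(const BYTE *buffer) {
    const FILE_NOTIFY_INFORMATION *fni =
        reinterpret_cast<const FILE_NOTIFY_INFORMATION *>(buffer);
    std::wstring oldName;  // stashed between the OLD_NAME and NEW_NAME records
    for (;;) {
        // FileNameLength is in bytes, not characters.
        std::wstring name(fni->FileName, fni->FileNameLength / sizeof(WCHAR));
        switch (fni->Action) {
        case FILE_ACTION_ADDED:    /* create, or second half of a move */ break;
        case FILE_ACTION_REMOVED:  /* delete, or first half of a move  */ break;
        case FILE_ACTION_MODIFIED: /* content or attribute change      */ break;
        case FILE_ACTION_RENAMED_OLD_NAME:
            oldName = name;        // first half of an in-place rename
            break;
        case FILE_ACTION_RENAMED_NEW_NAME:
            std::wcout << L"renamed: " << oldName << L" -> " << name << L"\n";
            break;
        }
        if (fni->NextEntryOffset == 0) break;
        fni = reinterpret_cast<const FILE_NOTIFY_INFORMATION *>(
            reinterpret_cast<const BYTE *>(fni) + fni->NextEntryOffset);
    }
}
```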
I voted Liviu M. up. However, another option, if you don't want to use the .NET framework for some reason, would be to use the basic Win32 API call FindFirstChangeNotification.
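A rough sketch of that API; note it only tells you that something changed somewhere in the watched tree, not what, so you would rescan and diff against your stored snapshot (the path is a placeholder):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Watch a directory tree for name and last-write changes.
    HANDLE h = FindFirstChangeNotificationW(
        L"C:\\watched", TRUE /* watch subtree */,
        FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE);
    if (h == INVALID_HANDLE_VALUE) return 1;

    for (;;) {
        if (WaitForSingleObject(h, INFINITE) != WAIT_OBJECT_0) break;
        std::puts("something changed; rescan and diff the snapshot");
        if (!FindNextChangeNotification(h)) break;  // re-arm for the next change
    }
    FindCloseChangeNotification(h);
    return 0;
}
```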
You can use USN journaling if you are up to it; that is pretty low-level (NTFS-level) stuff.
Here you can find detailed information, with source code included. It is written in C#, but most of it is P/Invoking C/C++ functions.
I'm trying to write a chat client for a popular network. The original client is proprietary, and is about 15 GB larger than I would like. (To be fair, others call it a game.)
There is absolutely no documentation available for the protocol on the internet, and most search results only come back with the client's scripting interface. I can understand that, since, used in the wrong way, it could ruin other people's experience.
I've downloaded the source code of a couple of alternative servers, including the one I want to connect to, but those
contain no documentation other than install instructions
are poorly commented (I did a superficial browsing)
are HUGE (the src folder of the target server contains 12 MB worth of .cpp and .h files), and grep didn't find anything related
I've also tried searching their forums and contacting the maintainers of the server, but so far, no luck.
Packet sniffing isn't likely to help, as the protocol relies heavily on encryption.
At this point, all my hope is my ability to chew through an ungodly amount of code. How do I start?
Edit: A related question.
If the original code does its encryption with some well-known library like OpenSSL or Crypto++, it might be useful to write your own wrappers for the main entry points of those libraries, delegating the calls to the actual library. If you make such a substitution and build the project successfully, you will be able to trace everything that goes out, in plain text.
If the project is not using third-party encryption libs, hopefully it is still possible to substitute the encryption routines with wrappers that trace their input and then delegate encryption to the actual code.
Your best bet is that encryption is usually implemented in a separate, relatively small number of source files, so it should be easier for you to track input/output in those files.
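To make the wrapper idea concrete, here is a hedged sketch for one specific case: the target links OpenSSL dynamically on Linux, so an LD_PRELOAD shim can intercept SSL_write and log the plaintext before delegating to the real library. (If the crypto is statically linked, or it's Crypto++, you would substitute at the source level instead, as described above.)

```cpp
// Build: g++ -shared -fPIC sslspy.cpp -o sslspy.so -ldl
// Run:   LD_PRELOAD=./sslspy.so ./server
#include <dlfcn.h>
#include <cstdio>
#include <openssl/ssl.h>

extern "C" int SSL_write(SSL *ssl, const void *buf, int num) {
    // Look up the real SSL_write once, then delegate to it.
    using real_fn = int (*)(SSL *, const void *, int);
    static real_fn real = (real_fn) dlsym(RTLD_NEXT, "SSL_write");

    std::fwrite(buf, 1, (size_t) num, stderr);  // plaintext, pre-encryption
    std::fputc('\n', stderr);
    return real(ssl, buf, num);
}
```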
Good luck!
I'd say:
find the call that is used to send data through the socket (the call depends on the network library)
find references to that call and unwind from there. If you can modify and recompile the server code, it might help.
On the way, you will be able to log decrypted (or, more likely, not yet encrypted) network activity.
IMO, the best answer is to read the source code of the alternative server. Try using a good C++ IDE to help you. It will make a lot of difference.
It is likely that the protocol related material you need to understand will be limited to a subset of the files. These will contain references to network sockets and things. Start from there and work outwards as far as you need to.
A viable approach is to tackle this as a crypto challenge. That makes it easy, because you control so much.
For instance, you can use a current client to send a known message to the server, and then check server memory for that string. Once you've found out in which object the string ends up, it also becomes possible to trace its ancestry through the code. Set a breakpoint on any non-const method of the object, and look at the stack traces. This gives you a live view of how messages arrive at the server, and a list of core functions essential to message processing. You can next find related functions (callers/callees of the functions on your list).
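To illustrate the memory-search step, here is a hedged Windows-flavoured sketch (function name and marker string are made up; on Linux you would walk /proc/&lt;pid&gt;/maps and read /proc/&lt;pid&gt;/mem instead):

```cpp
#include <windows.h>
#include <cstdio>
#include <cstring>
#include <vector>

// Scan a process's committed, readable regions for a known plaintext marker.
void ScanForMarker(HANDLE proc, const char *marker) {
    const size_t mlen = std::strlen(marker);
    MEMORY_BASIC_INFORMATION mbi;
    for (const BYTE *addr = nullptr;
         VirtualQueryEx(proc, addr, &mbi, sizeof(mbi)) == sizeof(mbi);
         addr = static_cast<const BYTE *>(mbi.BaseAddress) + mbi.RegionSize) {
        if (mbi.State != MEM_COMMIT ||
            !(mbi.Protect & (PAGE_READONLY | PAGE_READWRITE)))
            continue;  // skip free/reserved or unreadable regions
        std::vector<BYTE> buf(mbi.RegionSize);
        SIZE_T got = 0;
        if (!ReadProcessMemory(proc, mbi.BaseAddress, buf.data(), buf.size(), &got))
            continue;
        for (SIZE_T i = 0; i + mlen <= got; ++i)
            if (std::memcmp(buf.data() + i, marker, mlen) == 0)
                std::printf("hit at %p\n",
                            (const void *)(static_cast<const BYTE *>(mbi.BaseAddress) + i));
    }
}

// Usage sketch:
//   HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
//                             FALSE, pid);
//   ScanForMarker(proc, "my known chat message");
```

Once you have a hit, attach a debugger at that address and work outwards, exactly as described above.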