Preloading for importr in Django

Is there a way to preload libraries for the R instance that rpy2 talks to? I am spending 25-30% of my response time (about 0.5 s per chart) in importr calls to lattice or grDevices, and would like to cut that down if possible.
Code snippet:
from uuid import uuid4

from django.core.files import File
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr

grdevices = importr('grDevices')
importr('lattice')
imagefile = File(open('1d_%s.png' % str(uuid4()), 'w'))
grdevices.png(file=imagefile.name, type='cairo', width=400, height=350)
rcmd = """
print(
    xyplot(yvec~xvec, labels=labels, type=c('p','r'),
           ylab='%s', xlab='%s'
    )
)""" % (y_lab, x_lab)
robjects.r(rcmd)
grdevices.dev_off()
imagefile.close()
If I do not invoke importr("lattice"), robjects.r freaks out at the xyplot(...) call I make later. Can I use R_PROFILE or R_ENVIRON_USER to speed up the lattice and grDevices calls?

importr is a pretty high-level function, trading performance for ease of use. It does a lot besides just loading an R package: it also maps all R objects in that package to Python (rpy2) objects. That effort is lost when doing importr('lattice') in your script if the result is not used.
Besides that, importing packages in R itself is not without a cost (for the larger R packages with S4 class definitions, this can be noticeable when the script is short). rpy2 can't do much about this.
Using R environment variables such as R_PROFILE is possible, but this was not enabled by default until very recently. How to enable it is described on SO (here).
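If your rpy2 version does honour it, a minimal sketch of the idea would be to point R at a startup profile that preloads the packages before rpy2 initialises the embedded R instance. The profile path and its contents below are hypothetical, and whether the profile is actually sourced depends on your rpy2 version, as noted above:

import os

# Hypothetical startup profile; the file would contain plain R code such as:
#   library(lattice)
#   library(grDevices)
# It must be set *before* rpy2 initialises the embedded R instance.
os.environ["R_PROFILE"] = "/path/to/preload.Rprofile"

# R is initialised the first time rpy2.robjects is imported; if the profile
# is honoured, lattice and grDevices are already attached at this point.
import rpy2.robjects as robjects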
Now, here importr is taking "only" 25% of the response time. Optimization efforts focused on it cannot make the overall response more than 25% faster (and that is a very optimistic limit). Interpolating data into a string and then evaluating it as R code is also suboptimal (as warned in the rpy2 documentation). Consider calling the R function through rpy2, passing the data as anything exporting the buffer interface, for example.
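As a sketch of that last point (the helper name plot_xy and its arguments are placeholders, not the poster's actual view code), the importr calls can be done once at module import time, so a Django worker pays that cost at start-up rather than on every request, and xyplot can be called directly through rpy2 instead of interpolating a string:

import rpy2.robjects as robjects
from rpy2.robjects import Formula
from rpy2.robjects.packages import importr
from rpy2.robjects.vectors import FloatVector, StrVector

# Pay the importr cost once, when the Django process starts, not per request.
lattice = importr('lattice')
grdevices = importr('grDevices')
rprint = robjects.r['print']  # lattice plots only render when print()ed


def plot_xy(xvec, yvec, x_lab, y_lab, filename):
    """Render an xyplot of two Python sequences to a PNG file."""
    fml = Formula('y ~ x')
    env = fml.environment
    env['x'] = FloatVector(xvec)
    env['y'] = FloatVector(yvec)

    grdevices.png(file=filename, type='cairo', width=400, height=350)
    try:
        p = lattice.xyplot(fml, type=StrVector(('p', 'r')),
                           xlab=x_lab, ylab=y_lab)
        rprint(p)
    finally:
        grdevices.dev_off()

FloatVector copies the data; for large vectors, passing something that exports the buffer interface (for example a numpy array, via rpy2's numpy2ri conversion) avoids that copy, which is what the paragraph above is hinting at.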

Related

ArrayFire convolution issue with Cuda backend

I've been having an issue with a certain function call,
dphaseWeighted = af::convolve(dphaseWeighted, m_slowTimeFilter);
which seems to produce nothing but NaNs.
The background is that we have recently switched from using AF OpenCL to AF CUDA, and the problem that we are seeing happens in the function:
dphaseWeighted = af::convolve(dphaseWeighted, m_slowTimeFilter);
This seems to work well when using OpenCL.
Unfortunately, I can't give you the whole function because of IP, only a couple of snippets.
This convolve lies deep within a phase-extraction piece of code, and is actually the second part of that code which uses the af::convolve function.
The first function seems to behave as expected, with sensible floating point data out.
but then when it comes to the second function, all I'm seeing is NaNs coming out (I view that with af_print and by dumping the data to a file).
In the CMakeLists I include
include_directories(${ArrayFire_INCLUDE_DIRS})
and
target_link_libraries(DASPhaseInternalLib ${ArrayFire_CUDA_LIBRARIES})
and it builds as expected.
Has anyone experienced anything like this before?

Huge difference in dotTrace and Stopwatch function timings

I'm profiling the performance of a couple of functions in my C# application. I was using the .NET Stopwatch to time a function over 20,000 calls, and the timing worked out to around 2.8 ms per call.
However, when using dotTrace in line-by-line mode I find that 20,000 calls to my function take 249,584 ms, which is ~12.5 ms per call.
Now, the function is attached to a dispatch timer, so the Stopwatch was located inside the function and not registering the call itself. Like so:
private static Stopwatch myStop = new Stopwatch();

private void MyFunction(object obj, EventArgs e)
{
    myStop.Start();
    //Code here
    myStop.Stop();
    Console.WriteLine("Time elapsed for View Update: {0}", myStop.Elapsed);
    myStop.Reset();
}
However, I find it hard to believe that the call was taking 10 milliseconds on average.
Is there anything else that could be affecting the timing of the profiler or Stopwatch? Is the dispatch timer event affecting the timing that much?
I've looked through some of the JetBrains forums and wasn't able to find anything related to something like this, but I'm sure I could have looked harder and will continue to do so. I do realize that the Stopwatch is unreliable in some ways, but I didn't think it could be off by this much.
It should be noted that this is the first time I've profiled code in C# or .Net.
Short answer: line-by-line profiling has a bigger overhead than any other profiling type.
For line-by-line profiling, dotTrace (and other profilers) will insert calls to some profiler function, let's call it GetTime(), which calculates the time spent since the previous call, sums it, and writes it somewhere in the snapshot.
So your function is not so fast and simple anymore.
Without profiling your code can look like this:
myStop.Start();
var i = 5;
i++;
myStop.Stop();
And if you run it under the profiler it will look like this:
dotTrace.GetTime();
myStop.Start();
dotTrace.GetTime();
var i = 5;
dotTrace.GetTime();
i++;
dotTrace.GetTime();
myStop.Stop();
So the 12.5 ms you get includes all these profiler API calls, which distorts the absolute function time a lot. Line-by-line profiling is mostly intended for comparing the relative times of statements. So if you want to accurately measure absolute function times, you should use the sampling profiling type.
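The effect is easy to reproduce in any language. Here is a rough Python sketch of the same idea, with an explicit timestamp call standing in for the profiler's injected GetTime(); the numbers are illustrative, not dotTrace's actual overhead:

import time

def body():
    i = 5
    i += 1
    return i

def body_instrumented(samples):
    # Simulate what a line-by-line profiler injects: a timestamp call
    # around every statement.
    samples.append(time.perf_counter())
    i = 5
    samples.append(time.perf_counter())
    i += 1
    samples.append(time.perf_counter())
    return i

N = 200_000
t0 = time.perf_counter()
for _ in range(N):
    body()
plain = time.perf_counter() - t0

samples = []
t0 = time.perf_counter()
for _ in range(N):
    body_instrumented(samples)
instrumented = time.perf_counter() - t0

print("plain:        %.0f ns/call" % (plain / N * 1e9))
print("instrumented: %.0f ns/call" % (instrumented / N * 1e9))

The point is not the exact ratio but that the per-statement bookkeeping dominates a cheap function, which is exactly why the line-by-line figure cannot be read as an absolute time.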
For more information about profiling types you can refer to the dotTrace Profiling Types and Comparison of Profiling Types help pages.

libPd and c++ wrapper implementation

I'm trying to use libPd, the wrapper for PureData.
But the documentation is poor and I'm not very into C++
Do you know how I can simply send a floating value to a Pd patch?
Do I need to install libPd or can I just include the files?
First of all, check out ofxpd. It has an excellent libpd implementation with OpenFrameworks. If you are starting with C++ you may want to start with OpenFrameworks since it has some great documentation and nice integration with Pd via the ofxpd extension.
There are two good references for getting started with libpd (though neither cover C++ in too much detail): the original article and Peter Brinkmann's book.
On the libpd wiki there is a page for getting started with libpd. The linked project at the bottom has some code snippets in main.cpp that demonstrate how to send floats to your Pd patch.
pd.sendBang("fromCPP");
pd.sendFloat("fromCPP", 100);
pd.sendSymbol("fromCPP", "test string");
In your Pd patch you'll set up a [receive fromCPP] and then these messages will register in your patch.
In order to get the print output you have to use the receivers from libpd to receive the strings and then do something with them. libpd comes with PdBase, which is a great class for getting libpd up and running. PdBase has sendBang, sendFloat, and sendMessage, and also has the receivers set up so that you can get output from your Pd patch.
if you want to send a value to a running instance of Pd (the standalone application), you could do so via Pd's networking facilities.
e.g.
[netreceive 65432 1]
|
[route value]
|
[print]
will receive data sent from the cmdline via:
echo "value 1.234567;" | pdsend 65432 localhost udp
you can also send multiple values at once, e.g.
echo "value 1.234567 3.141592;" | pdsend 65432 localhost udp
if you find pdsend too slow for your purposes (e.g. if you launch the executable for each message you want to send, you have a considerable overhead!), you could construct the message directly in your application and use an ordinary UDP socket to send FUDI messages to Pd.
FUDI-messages really are simple text strings, with atoms separated by whitespace and a terminating semicolon, e.g.
accelerator 1.23 3.14 2.97; button 1;
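a minimal Python sketch of that approach (the port and the leading "value" selector match the [netreceive 65432 1] / [route value] example above; host and values are illustrative):

import socket

# FUDI over UDP: plain-text atoms separated by whitespace, terminated by ';'.
# The receiving patch would contain [netreceive 65432 1] -> [route value].
PD_HOST, PD_PORT = "localhost", 65432

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    message = "value 1.234567 3.141592;\n"
    sock.sendto(message.encode("ascii"), (PD_HOST, PD_PORT))
finally:
    sock.close()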
you might also consider using OSC, but for this you will need some externals (OSC by mrpeach; net by mrpeach (or iemnet)) on the Pd side.
as for performance, i've been using the latter with complex tracking data (hundreds of values per frame at 125fps) and for streaming multichannel audio, so i don't think this is a problem.
if you are already using libPd and only want to communicate from the host-application, use Adam's solution (but your question is a bit vague about that, so i'm including this answer just in case)

my c++ extension behaves differently with faulthandler

Background
I have a C++ extension which runs a 3D watershed pass on a buffer. It's got a nice Cython wrapper to initialise a massive buffer of signed chars to represent the voxels. I initialise some native data structures in python (in a compiled cython file) and then call one C++ function to initialise the buffer, and another to actually run the algorithm (I could have written these in Cython too, but I'd like it to work as a C++ library as well, without a python.h dependency.)
Weirdness
I'm in the process of debugging my code, trying different image sizes to gauge RAM usage and speed, etc., and I've noticed something very strange about the results: they change depending on whether I use python test.py (specifically /usr/bin/python on Mac OS X 10.7.5/Lion, which is python 2.7) or run python interactively, import test, and call a function on it (and indeed, on my laptop (OS X 10.6.latest, with macports python 2.7) the results are also deterministically different: each platform/situation is different, but each one is always the same as itself). In all cases, the same function is called, loads some input data from a file, and runs the C++ module.
A note on 64-bit python: I am not using distutils to compile this code, but something akin to my answer here (i.e. with an explicit -arch x86_64 flag). This shouldn't matter, and all my processes in Activity Monitor are listed as Intel (64-bit).
As you may know, the point of watershed is to find objects in the pixel soup - in 2D it's often used on photos. Here, I'm using it to find lumps in 3D in much the same way - I start with some lumps ("grains") in the image and I want to find the inverse lumps ("cells") in the space between them.
The way the results change is that I literally find a different number of lumps. For exactly the same input data:
python test.py:
grain count: 1434
seemed to have 8000000 voxels, with average value 0.8398655
find cells:
running watershed algorithm...
found 1242 cells from 1434 original grains!
...
however,
python, import test, test.run():
grain count: 1434
seemed to have 8000000 voxels, with average value 0.8398655
find cells:
running watershed algorithm...
found 927 cells from 1434 original grains!
...
This is the same in the interactive python shell and bpython, which I originally thought was to blame.
Note the "average value" number is exactly the same - this indicates that the same fraction of voxels have initially been marked as in the problem space - i.e. that my input file was initialised in (very very probably) exactly the same way both times in voxel-space.
Also note that no part of the algorithm is non-deterministic; there are no random numbers or approximations; subject to floating point error (which should be the same each time) we should be performing exactly the same computations on exactly the same numbers both times. Watershed runs using a big buffer of integers (here signed chars) and the results are counting clusters of those integers, all of which is implemented in one big C++ call.
I have tested the __file__ attribute of the relevant module objects (which are themselves attributes of the imported test), and they're pointing to the same installed watershed.so in my system's site-packages.
Questions
I don't even know where to begin debugging this:
- How is it possible to call the same function with the same input data and get different results?
- What about interactive python might cause this (e.g. by changing the way the data is initialised)?
- Which parts of the (rather large) codebase are relevant to these questions?
In my experience it's much more useful to post ALL the code in a stackoverflow question, and not assume you know where the problem is. However, that is thousands of lines of code here, and I have literally no idea where to start! I'm happy to post small snippets on request.
I'm also happy to hear debugging strategies - interpreter state that I can check, details about the way python might affect an imported C++ binary, and so on.
Here's the structure of the code:
project/
    clibs/
        custom_types/
            adjacency.cpp (and hpp) << graph adjacency (2nd pass; irrelevant = irr)
            *array.c (and h) << dynamic array of void*s
            *bit_vector.c (and h) << int* as bitfield with helper functions
            polyhedron.cpp (and hpp) << for voxel initialisation; convex hull result
            smallest_ints.cpp (and hpp) << for voxel entity affiliation tracking (irr)
        custom_types.cpp (and hpp) << wraps all files in custom_types/
        delaunay.cpp (and hpp) << marshals calls to stripack.f90
        *stripack.f90 (and h) << for computing the convex hulls of grains
        tensors/
            *D3Vector.cpp (and hpp) << 3D double vector impl with operators
        watershed.cpp (and hpp) << main algorithm entry points (ini, +two passes)
    pywat/
        __init__.py
        watershed.pyx << cython class, python entry points.
        geometric_graph.py << python code for post processing (irr)
        setup.py << compile and install directives
    test/
        test.py << entry point for testing installed lib
(files marked * have been used extensively in other projects and are very well tested, those suffixed irr contain code only run after the problem has been caused.)
Details
as requested, the main stanza in test/test.py:
testfile = 'valid_filename'
if __name__ == "__main__":
    # handles segfaults...
    import faulthandler
    faulthandler.enable()
    run(testfile)
and my interactive invocation looks like:
import test
test.run(test.testfile)
Clues
when I run this at the straight interpreter:
import faulthandler
faulthandler.enable()
import test
test.run(test.testfile)
I get the results from the file invocation (i.e. 1242 cells), although when I run it in bpython, it just crashes.
This is clearly the source of the problem - hats off to Ignacio Vazquez-Abrams for asking the right question straight away.
UPDATE:
I've opened a bug on the faulthandler github and I'm working towards a solution. If I find something that people can learn from I'll post it as an answer.
After debugging this application extensively (printf()ing out all the data at multiple points during the run, piping outputs to log files, diffing the log files) I found what seemed to cause the strange behaviour.
I was using uninitialised memory in a couple of places, and (for some bizarre reason) this gave me repeatable behaviour differences between the two cases I describe above - one without faulthandler and one with.
Incidentally, this is also why the bug disappeared from one machine but continued to manifest itself on another, part way through debugging (which really should have given me a clue!)
My mistake here was to assume things about the problem based on a spurious correlation - in theory the garbage RAM should have been differently random each time I accessed it (ahh, theory). In this case I would have been quicker finding the problem with a printout of the main calculation function and a rubber duck.
So, as usual, the answer is the bug is not in the library, it is somewhere in your code - in this case, it was my fault for malloc()ing a chunk of RAM, falsely assuming that other parts of my code were going to initialise it (which they only did sometimes.)

What core packages should a professional R developer have, and why? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
What are the specific utilities that can help R developers code and debug more efficiently?
I'm looking to set up an R development environment, and would like an overview of the tools that would be useful to me in crafting a unit testing infrastructure with code coverage, debugging, generation of package files and help files and maybe UML modeling.
Note: Please justify your answers with reasons and examples based on your experience with the tools you recommend. Don't just link.
Related
Recommendations for Windows text editor for R
What IDEs are available for R in Linux?
Tools Commonly used to Program in R
I have written way too many packages, so to keep things manageable I've invested a lot of time in infrastructure packages: packages that help me make my code more robust and help make it easier for others to use. These include:
roxygen2 (with Manuel Eugster and Peter Danenberg), which allows you to keep documentation next to the function it documents, which makes it much more likely that I'll keep it up to date. roxygen2 also has a number of new features designed to minimise documentation duplication: templates (@template), parameter inheritance (@inheritParams), and function families (@family), to name a few.
testthat automates the testing of my code. This is becoming more and more important as I have less and less time to code: automated tests remember how the function should work, even when I don't.
devtools automates many common development tasks (as Andrie mentioned). The eventual goal for devtools is for it to act like R CMD check that runs continuously in the background and notifies you the instant that something goes wrong.
profr, particularly the unreleased interactive explorer, makes it easy for me to find bottlenecks in my code.
helpr (with Barret Schloerke), which will soon power http://had.co.nz/ggplot2, provides an elegant html interface to R documentation.
Useful R functions:
apropos: I'm always forgetting the names of useful functions, and apropos helps me find them, even if I only remember a fragment
Outside of R:
I use TextMate to edit R (and other) files, but I don't think it's really that important. Pick one and learn all its nooks and crannies.
Spend some time learning the command line. Anything you can do to automate any part of your workflow will pay off in the long run. Running R from the command line leads to a natural process where each project has its own instance of R; I often have 2-5 instances of R running at a time.
Use version control. I like git and github. Again, it doesn't matter exactly which system you use, but master it!
Things I wish R had:
code coverage tools
a dependency management framework like rake or jake
better memory profiling tools
a metadata standard for describing data frames (and other data sources)
better tools for describing and rendering tables in a variety of output formats
a package for markdown rendering
As I recall this has been asked before and my answer remains the same: Emacs.
Emacs can
do just about anything you want to do with R thanks to ESS, including
code execution of various snippets (line, region, function, buffer, ...)
inspection of workspaces,
display of variables,
multiple R sessions and easy switching between them
transcript mode for re-running (parts of) previous sessions
access to the help system
and much more
handles LaTeX with similar ease via the AUCTeX mode, which helps with Sweave for R
has modes for whichever other programming languages you combine with R, be it C/C++, Python, shell, SQL, ... covering automatic indentation and colour highlighting
can access databases with sql-* mode
can work remotely with tramp mode: access remote files as if they were local (uses ssh/scp)
can be run as a daemon which makes it stateful, so you can reconnect to your same Emacs session, be it on the workstation under X11 (or equivalent) or remotely via ssh (with or without X11) or screen.
has org-mode, which together with babel provides a powerful Sweave alternative, as discussed in this paper on workflow apps for (social) scientists
can run a shell via M-x shell and/or M-x eshell, has nice directory access functionality with dired mode, has ssh mode for remote access
interfaces all source code repositories with ease via specific modes (eg psvn for svn)
is cross-platform just like R so you have similar user-interface experiences on all relevant operating systems
is widely used, widely available and under active development for both code and extensions, see the emacswiki.org site for the latter
<tongueInCheek>is not Eclipse and does not require Java</tongueInCheek>
You can of course combine it with whichever CRAN packages you like: RUnit or testthat, the different profiling support packages, the debug package, ...
Additional tools that are useful:
R CMD check really is your friend as this is what CRAN uses to decide whether you are "in or out"; use it and trust it
the tests/ directory can offer a simplified version of unit tests by saving output to be compared against (from a prior R CMD check run); this is useful, but proper unit tests are better
particularly for packages with object code, I prefer to launch fresh R sessions and littler makes that easy: r -lfoo -e'bar(1, "ab")' starts an R session, loads the foo package and evaluates the given expression (here a function bar() with two arguments). This, combined with R CMD INSTALL, provides a full test cycle.
Knowledge of, and ability to use, the basic R debugging tools is an essential first step in learning to quickly debug R code. If you know how to use the basic tools you can debug code anywhere without having to need all the extra tools provided in add-on packages.
traceback() allows you to see the call stack leading to an error
foo <- function(x) {
    d <- bar(x)
    x[1]
}
bar <- function(x) {
    stopifnot(is.matrix(x))
    dim(x)
}
foo(1:10)
traceback()
yields:
> foo(1:10)
Error: is.matrix(x) is not TRUE
> traceback()
4: stop(paste(ch, " is not ", if (length(r) > 1L) "all ", "TRUE",
sep = ""), call. = FALSE)
3: stopifnot(is.matrix(x))
2: bar(x)
1: foo(1:10)
So we can clearly see that the error happened in function bar(); we've narrowed down the scope of the bug hunt. But what if the code generates warnings, not errors? That can be handled by turning warnings into errors via the warn option:
options(warn = 2)
will turn warnings into errors. You can then use traceback() to track them down.
Linked to this is getting R to recover from an error in the code so you can debug what went wrong. options(error = recover) will drop us into a debugger frame whenever an error is raised:
> options(error = recover)
> foo(1:10)
Error: is.matrix(x) is not TRUE
Enter a frame number, or 0 to exit
1: foo(1:10)
2: bar(x)
3: stopifnot(is.matrix(x))
Selection: 2
Called from: bar(x)
Browse[1]> x
[1] 1 2 3 4 5 6 7 8 9 10
Browse[1]> is.matrix(x)
[1] FALSE
You see we can drop into each frame on the call stack and see how the functions were called, what the arguments are etc. In the above example, we see that bar() was passed a vector not a matrix, hence the error. options(error = NULL) resets this behaviour to normal.
Another key function is trace(), which allows you to insert debugging calls into an existing function. The benefit of this is that you can tell R to debug from a particular line in the source:
> x <- 1:10; y <- rnorm(10)
> trace(lm, tracer = browser, at = 10) ## debug from line 10 of the source
Tracing function "lm" in package "stats"
[1] "lm"
> lm(y ~ x)
Tracing lm(y ~ x) step 10
Called from: eval(expr, envir, enclos)
Browse[1]> n ## must press n <return> to get the next line step
debug: mf <- eval(mf, parent.frame())
Browse[2]>
debug: if (method == "model.frame") return(mf) else if (method != "qr") warning(gettextf("method = '%s' is not supported. Using 'qr'",
method), domain = NA)
Browse[2]>
debug: if (method != "qr") warning(gettextf("method = '%s' is not supported. Using 'qr'",
method), domain = NA)
Browse[2]>
debug: NULL
Browse[2]> Q
> untrace(lm)
Untracing function "lm" in package "stats"
This allows you to insert the debugging calls at the right point in the code without having to step through the preceding function calls.
If you want to step through a function as it is executing, then debug(foo) will turn on the debugger for function foo(), whilst undebug(foo) will turn off the debugger.
A key point about these options is that I haven't needed to modify/edit any source code to insert debugging calls etc. I can try things out and see what the problem is directly from the session where the error has occurred.
For a different take on debugging in R, see Mark Bravington's debug package on CRAN