Background
I have a C++ extension which runs a 3D watershed pass on a buffer. It's got a nice Cython wrapper to initialise a massive buffer of signed chars to represent the voxels. I initialise some native data structures in Python (in a compiled Cython file) and then call one C++ function to initialise the buffer, and another to actually run the algorithm (I could have written these in Cython too, but I'd like it to work as a C++ library as well, without a Python.h dependency).
Weirdness
I'm in the process of debugging my code, trying different image sizes to gauge RAM usage and speed, etc., and I've noticed something very strange about the results: they change depending on whether I use python test.py (specifically /usr/bin/python on Mac OS X 10.7.5/Lion, which is python 2.7) or run python interactively, import test, and call a function on it. (Indeed, on my laptop (OS X 10.6.latest, with macports python 2.7) the results are also deterministically different: each platform/situation gives different numbers, but each one is always the same as itself.) In all cases, the same function is called, loads some input data from a file, and runs the C++ module.
A note on 64-bit python - I am not using distutils to compile this code, but something akin to my answer here (i.e. with an explicit -arch x86_64 call). This shouldn't matter, and all my processes in Activity Monitor are listed as Intel (64-bit).
As you may know, the point of watershed is to find objects in the pixel soup - in 2D it's often used on photos. Here, I'm using it to find lumps in 3D in much the same way - I start with some lumps ("grains") in the image and I want to find the inverse lumps ("cells") in the space between them.
The way the results change is that I literally find a different number of lumps. For exactly the same input data:
python test.py:
grain count: 1434
seemed to have 8000000 voxels, with average value 0.8398655
find cells:
running watershed algorithm...
found 1242 cells from 1434 original grains!
...
however,
python, import test, test.run():
grain count: 1434
seemed to have 8000000 voxels, with average value 0.8398655
find cells:
running watershed algorithm...
found 927 cells from 1434 original grains!
...
This is the same in the interactive python shell and bpython, which I originally thought was to blame.
Note the "average value" number is exactly the same - this indicates that the same fraction of voxels have initially been marked as in the problem space - i.e. that my input file was initialised in (very very probably) exactly the same way both times in voxel-space.
Also note that no part of the algorithm is non-deterministic; there are no random numbers or approximations; subject to floating point error (which should be the same each time) we should be performing exactly the same computations on exactly the same numbers both times. Watershed runs using a big buffer of integers (here signed chars) and the results are counting clusters of those integers, all of which is implemented in one big C++ call.
I have tested the __file__ attribute of the relevant module objects (which are themselves attributes of the imported test), and they're pointing to the same installed watershed.so in my system's site-packages.
Questions
I don't even know where to begin debugging this.
- How is it possible to call the same function with the same input data and get different results?
- What about interactive python might cause this (e.g. by changing the way the data is initialised)?
- Which parts of the (rather large) codebase are relevant to these questions?
In my experience it's much more useful to post ALL the code in a stackoverflow question, and not assume you know where the problem is. However, there are thousands of lines of code here, and I have literally no idea where to start! I'm happy to post small snippets on request.
I'm also happy to hear debugging strategies - interpreter state that I can check, details about the way python might affect an imported C++ binary, and so on.
Here's the structure of the code:
project/
    clibs/
        custom_types/
            adjacency.cpp (and hpp)     << graph adjacency (2nd pass; irrelevant = irr)
            *array.c (and h)            << dynamic array of void*s
            *bit_vector.c (and h)       << int* as bitfield with helper functions
            polyhedron.cpp (and hpp)    << for voxel initialisation; convex hull result
            smallest_ints.cpp (and hpp) << for voxel entity affiliation tracking (irr)
        custom_types.cpp (and hpp)      << wraps all files in custom_types/
        delaunay.cpp (and hpp)          << marshals calls to stripack.f90
        *stripack.f90 (and h)           << for computing the convex hulls of grains
        tensors/
            *D3Vector.cpp (and hpp)     << 3D double vector impl with operators
        watershed.cpp (and hpp)         << main algorithm entry points (ini, +two passes)
    pywat/
        __init__.py
        watershed.pyx                   << cython class, python entry points.
        geometric_graph.py              << python code for post processing (irr)
        setup.py                        << compile and install directives
    test/
        test.py                         << entry point for testing installed lib
(files marked * have been used extensively in other projects and are very well tested, those suffixed irr contain code only run after the problem has been caused.)
Details
As requested, the main stanza in test/test.py:
testfile = 'valid_filename'

if __name__ == "__main__":
    # handles segfaults...
    import faulthandler
    faulthandler.enable()

    run(testfile)
and my interactive invocation looks like:
import test
test.run(test.testfile)
Clues
When I run this at the straight interpreter:
import faulthandler
faulthandler.enable()
import test
test.run(test.testfile)
I get the results from the file invocation (i.e. 1242 cells), although when I run it in bpython, it just crashes.
This is clearly the source of the problem - hats off to Ignacio Vazquez-Abrams for asking the right question straight away.
UPDATE:
I've opened a bug on the faulthandler github and I'm working towards a solution. If I find something that people can learn from I'll post it as an answer.
After debugging this application extensively (printf()ing out all the data at multiple points during the run, piping outputs to log files, diffing the log files) I found what seemed to cause the strange behaviour.
I was using uninitialised memory in a couple of places, and (for some bizarre reason) this gave me repeatable behaviour differences between the two cases I describe above - one without faulthandler and one with.
Incidentally, this is also why the bug disappeared from one machine but continued to manifest itself on another, part way through debugging (which really should have given me a clue!)
My mistake here was to assume things about the problem based on a spurious correlation - in theory the garbage ram should have been differently random each time I accessed it (ahh, theory.) In this case I would have been quicker finding the problem with a printout of the main calculation function and a rubber duck.
So, as usual, the answer is the bug is not in the library, it is somewhere in your code - in this case, it was my fault for malloc()ing a chunk of RAM, falsely assuming that other parts of my code were going to initialise it (which they only did sometimes.)
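To illustrate the kind of mistake, here is a minimal sketch (not the project's actual code; the buffer size and names are made up):

#include <cstdlib>
#include <cstring>
#include <vector>

const size_t N_VOXELS = 8000000;  // hypothetical voxel count

void buggy() {
    // BUG: malloc() returns uninitialised memory. Reading it before every
    // element has been written gives whatever garbage happens to be there,
    // which can differ between runs and between process environments.
    signed char* voxels = (signed char*)malloc(N_VOXELS);
    // ... code that only *sometimes* initialises every voxel ...
    free(voxels);
}

void fixed() {
    // Fix: zero the block at allocation time (calloc() or memset())...
    signed char* voxels = (signed char*)calloc(N_VOXELS, sizeof(signed char));
    free(voxels);

    // ...or, more idiomatically in C++, use a std::vector, which
    // value-initialises its elements to zero.
    std::vector<signed char> voxels2(N_VOXELS);
}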
Related
I've been having an issue with a certain function call,
dphaseWeighted = af::convolve(dphaseWeighted, m_slowTimeFilter);
which seems to produce nothing but NaNs.
The background is that we have recently switched from using AF OpenCL to AF CUDA, and the problem we are seeing happens in the call
dphaseWeighted = af::convolve(dphaseWeighted, m_slowTimeFilter);
This seems to work well when using OpenCL.
Unfortunately, I can't give you the whole function because of IP; only a couple of snippets.
This convolve lies deep within a phase-extraction piece of code, and is actually the second part of that code which uses the af::convolve function.
The first call seems to behave as expected, with sensible floating-point data coming out.
But when it comes to the second call, all I'm seeing is NaNs coming out (I view that with af_print and by dumping the data to a file).
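For reference, a generic way to check an ArrayFire array for NaNs programmatically (using the stock af::isNaN and af::anyTrue reduction; this is just a helper sketch, not code from our application):

#include <arrayfire.h>

// Helper sketch: true if any element of the array is NaN.
static bool containsNaN(const af::array& a) {
    return af::anyTrue<bool>(af::isNaN(a));
}

// Used around the problematic call, e.g.:
//   bool inputBad  = containsNaN(dphaseWeighted) || containsNaN(m_slowTimeFilter);
//   dphaseWeighted = af::convolve(dphaseWeighted, m_slowTimeFilter);
//   bool outputBad = containsNaN(dphaseWeighted);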
In the CMakeLists.txt I include
include_directories(${ArrayFire_INCLUDE_DIRS})
and
target_link_libraries(DASPhaseInternalLib ${ArrayFire_CUDA_LIBRARIES})
and it builds as expected.
Has anyone experienced anything like this before?
Is there a way to preload libraries for the R instance that rpy2 talks to? I am spending 25-30% of my response time (about .5s per chart) in importr calls to lattice or grdevices, and would like to cut down if possible.
Code snippet:
# Assumed imports for this snippet; File is whatever file wrapper the
# surrounding web app uses (it only needs .name and .close()).
from uuid import uuid4
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr

grdevices = importr('grDevices')
importr('lattice')
imagefile = File(open('1d_%s.png' % str(uuid4()), 'w'))
grdevices.png(file=imagefile.name, type='cairo', width=400, height=350)
rcmd = """
print(
    xyplot(yvec~xvec, labels=labels, type=c('p','r'),
           ylab='%s', xlab='%s')
)""" % (y_lab, x_lab)
robjects.r(rcmd)
grdevices.dev_off()
imagefile.close()
If I do not invoke importr("lattice"), robjects.r freaks at the "xyplot(..." call I make later. Can I use R_PROFILE or R_ENVIRON_USER to speed up the lattice and grdevices calls?
importr is a pretty high-level function, trading performance for ease of use. It does a lot besides just loading an R package: it also maps all R objects in that package to Python (rpy2) objects. That effort is lost when doing importr('lattice') in your script if the result is not used.
Besides that, importing packages in R itself is not without cost (for the larger R packages with S4 class definitions, this can be noticeable when the script is short). rpy2 can't do much about this.
Using R environment variables such as R_PROFILE is possible, but this was not enabled by default until very recently. How to enable it is covered on SO (here).
Now, here importr is taking "only" 25% of the response time. Optimization efforts focusing on it will not be able to make the overall response more than 25% faster (and that's a very optimistic limit). Interpolating data into a string in order to evaluate it as R code afterwards is not very optimal either (as warned in the rpy2 documentation). Consider calling the R function through rpy2, passing the data as anything exporting the buffer interface (for example).
I'm trying to use libPd, the wrapper for PureData.
But the documentation is poor and I'm not very into C++
Do you know how I can simply send a floating value to a Pd patch?
Do I need to install libPd or I can just include the files?
First of all, check out ofxpd. It has an excellent libpd implementation with OpenFrameworks. If you are starting with C++ you may want to start with OpenFrameworks since it has some great documentation and nice integration with Pd via the ofxpd extension.
There are two good references for getting started with libpd (though neither cover C++ in too much detail): the original article and Peter Brinkmann's book.
On the libpd wiki there is a page for getting started with libpd. The linked project at the bottom has some code snippets in main.cpp that demonstrate how to send floats to your Pd patch.
pd.sendBang("fromCPP");
pd.sendFloat("fromCPP", 100);
pd.sendSymbol("fromCPP", "test string");
In your Pd patch you'll set up a [receive fromCPP] and then these messages will register in your patch.
To get the print output you have to use the receivers from libpd in order to receive the strings and then do something with them. libpd comes with PdBase, which is a great class for getting libpd up and running. PdBase has sendBang, sendFloat, sendMessage, and also has the receivers set up so that you can get output from your Pd patch.
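A minimal standalone sketch of that setup with PdBase (untested; the patch name, receiver name, and audio settings are placeholders):

#include "PdBase.hpp"

int main() {
    pd::PdBase pd;

    // 1 input channel, 2 output channels, 44.1 kHz -- placeholder audio setup.
    if (!pd.init(1, 2, 44100)) {
        return 1;
    }
    pd.computeAudio(true);

    // "test.pd" is a placeholder patch containing a [receive fromCPP] object.
    pd::Patch patch = pd.openPatch("test.pd", ".");

    // Send a float (and friends) to [receive fromCPP] in the patch.
    pd.sendFloat("fromCPP", 100);
    pd.sendBang("fromCPP");
    pd.sendSymbol("fromCPP", "test string");

    pd.closePatch(patch);
    return 0;
}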
if you want to send a value to a running instance of Pd (the standalone application), you could do so via Pd's networking facilities.
e.g.
[netreceive 65432 1]
|
[route value]
|
[print]
will receive data sent from the cmdline via:
echo "value 1.234567;" | pdsend 65432 localhost udp
you can also send multiple values at once, e.g.
echo "value 1.234567 3.141592;" | pdsend 65432 localhost udp
if you find pdsend too slow for your purposes (e.g. if you launch the executable for each message you want to send, you have considerable overhead!), you could construct the message directly in your application and use an ordinary UDP socket to send FUDI-messages to Pd.
FUDI-messages really are simple text strings, with atoms separated by whitespace and a terminating semicolon, e.g.
accelerator 1.23 3.14 2.97; button 1;
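a rough sketch of doing that from C++ with a plain POSIX UDP socket (the port is the one from the [netreceive] example above; error handling omitted):

#include <arpa/inet.h>
#include <cstring>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Send one FUDI message to a Pd [netreceive <port> 1] (UDP) on localhost.
void send_fudi(const std::string& msg, int port = 65432) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    sendto(sock, msg.c_str(), msg.size(), 0,
           reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(sock);
}

// e.g. send_fudi("value 1.234567;\n");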
you might also consider using OSC, but for this you will need some externals (OSC by mrpeach; net by mrpeach (or iemnet)) on the Pd side.
as for performance, I've been using the latter with complex tracking data (hundreds of values per frame at 125fps) and for streaming multichannel audio, so I don't think this is a problem.
if you are already using libPd and only want to communicate from the host application, use Adam's solution (but your question is a bit vague about that, so I'm including this answer just in case).
I started to mess with Ypsilon, which is a C++ implementation of Scheme.
It conforms to R6RS, features a fast garbage collector, supports multi-core CPUs and Unicode, but has a LACK of documentation, C++ code examples, and comments in the code!
Authors provide it as a standalone console application.
My goal is to use it as a scripting engine in an image processing application.
The source code is well structured, but the structure is unfamiliar.
I spent two weeks penetrating it, and here's what I've found out:
- All communication with the outer world is done via C++ structures called ports; they correspond to Scheme ports.
- The virtual machine has 3 ports: IN, OUT and ERROR.
- Ports can be std-ports (via console), socket-ports, bytevector-ports, named-file-ports and custom-ports.
- Each custom port must provide a filled structure called handlers.
- Handlers is a vector containing 6 elements: the 1st one is a boolean (whether the port is textual), and the other five are function pointers (onRead, onWrite, onSetPos, onGetPos, onClose).
As far as I understand, I need to implement 3 custom ports (IN, OUT and ERROR).
But for now I can't figure out what the input parameters of each function (onRead, onWrite, onSetPos, onGetPos, onClose) in handlers are.
Unfortunately, there is neither an example of implementing a custom port, nor an example of the following:
- C++ to Scheme function bindings (the provided examples are a bunch of .scm files; it's still unclear what to do on the C++ side).
- Compiling and running bytecode (via bytevector-ports? But how to compile text to bytecode?).
To summarize: if anyone can provide a C++ example of any scenario mentioned above, it would save me a significant amount of time.
Thanks in advance!
Okay, from what I can read of the source code, here's how the various handlers get called (this is all unofficial, based purely on source code inspection):
Read handler: (lambda (bv off len)): takes a bytevector (which your handler will put the read data into), an offset (fixnum), and a length (fixnum). You should read in up to len bytes, placing those bytes into bv starting at off. Return the number of bytes actually read in (as a fixnum).
Write handler: (lambda (bv off len)): takes a bytevector (which contains the data to write), an offset (fixnum), and a length (fixnum). Grab up to len bytes from bv, starting at off, and write them out. Return the number of bytes actually written (as a fixnum).
Get position handler: (lambda (pos)) (called in text mode only): Allows you to store some data for pos so that a future call to the set position handler with the same pos value will reset the position back to the current position. Return value ignored.
Set position handler: (lambda (pos)): Move the current position to the value of pos. Return value ignored.
Close handler: (lambda ()): Close the port. Return value ignored.
To answer another question you had, about compiling and running "bytecode":
To compile an expression, use compile. This returns a code object.
There is no publicly-exported approach to run this code object. Internally, the code uses run-vmi, but you can't access this from outside code.
Internally, the only place where compiled code is loaded and used is in its auto-compile-cache system.
Have a look at heap/boot/eval.scm for details. (Again, this is not an official response, but based purely on personal experimentation and source code inspection.)
I am using the IBM CPLEX optimizer to solve an optimization problem and I don't want all the terminal prints that the optimizer does. Is there a member function that turns this off in the IloCplex or IloModel class? These are the prints about cuts and iterations. Prints to the terminal are expensive, and my problem will eventually be on the order of millions of variables, so I don't want to waste time on these superfluous outputs. Thanks.
Using cplex/concert, you can completely turn off cplex's logging to the console with
cpx.setOut(env.getNullStream());
where cpx is an IloCplex object. You can also use the setOut function to redirect the log to a file.
There are several cplex parameters to control what gets logged, for example MIPInterval will set the number of MIP nodes searched between lines. Turning MIPDisplay to 0 will turn off the display of cuts except when new solutions are found, while MIPDisplay of 5 will show detailed information about every lp-subproblem.
Logging-related parameters include MIPInterval, MIPDisplay, SimDisplay, BarDisplay, and NetDisplay.
You set parameters with the setParam function.
cpx.setParam(IloCplex::MIPInterval, 1000)
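Putting the pieces together, a minimal sketch in Concert/C++ (the model building is elided and the parameter values are just examples):

#include <ilcplex/ilocplex.h>

int main() {
    IloEnv env;
    IloModel model(env);
    // ... build the model here ...

    IloCplex cpx(model);

    // Silence cplex's console logging entirely (warnings too, if desired).
    cpx.setOut(env.getNullStream());
    cpx.setWarning(env.getNullStream());

    // Or keep the log but make it less chatty:
    cpx.setParam(IloCplex::MIPInterval, 1000);  // nodes between log lines
    cpx.setParam(IloCplex::MIPDisplay, 0);      // minimal MIP display

    cpx.solve();
    env.end();
    return 0;
}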