Passing a 2D numpy array to C++ for PDE integration

I have code that implements the finite-differences method for integrating a certain partial differential equation. As I want to speed the code up, I would like to pass the 2D NumPy array from my Python code to a C++ function that will implement the integrator. I have read a few questions here on this subject, but I wanted to ask which would be more suitable for this task: SWIG or Cython? (Or a different method?)
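For what it's worth, here is a minimal sketch of what the C++ side might look like if the array is passed as a contiguous row-major buffer plus its shape. The function name, parameters, and the heat-equation right-hand side are purely illustrative assumptions; either SWIG or Cython (or even ctypes) can wrap a signature like this:

    // integrate_step.cpp -- illustrative only; names and the PDE are assumptions.
    #include <cstddef>
    #include <vector>

    // u is the 2D field flattened in row-major (C) order, shape (nx, ny).
    // One explicit Euler step of the heat equation u_t = laplacian(u).
    extern "C" void integrate_step(double* u, int nx, int ny,
                                   double dt, double dx)
    {
        // Copy the previous time level so the update stays explicit.
        std::vector<double> prev(u, u + static_cast<std::size_t>(nx) * ny);
        for (int i = 1; i < nx - 1; ++i) {
            for (int j = 1; j < ny - 1; ++j) {
                const double lap = (prev[(i + 1) * ny + j] + prev[(i - 1) * ny + j]
                                  + prev[i * ny + j + 1] + prev[i * ny + j - 1]
                                  - 4.0 * prev[i * ny + j]) / (dx * dx);
                u[i * ny + j] = prev[i * ny + j] + dt * lap;
            }
        }
    }

On the Python side, a C-contiguous float64 array maps onto that double* directly, so no per-element copying is needed; Cython's typed memoryviews and SWIG's numpy.i typemaps both generate that glue for you.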

Related

Equivalent of wavedec (MATLAB function) in OpenCV

I am trying to rewrite some MATLAB code in C++ and I am still blocked on this line:
[c, l] = wavedec(S, 4, 'Dmey');
Is there something like that in OpenCV?
If someone has an idea about it, please share it; thanks in advance.
Maybe, if you can integrate your code with Python, PyWavelets might be an option.
The function you're looking for is in there. The wavelet it uses is the discrete Meyer wavelet (Dmey). I'm not sure what you're planning to do with it; maybe you're processing some images. Dmey is fine, though not very widely used. You might want to find some existing code on GitHub and integrate it into whatever you're doing to see whether it works first, and based on that you can also change the details of your currently posted call (you might find something more efficient).
1-D wavelet decomposition (wavedec)
In your code, c and l stand for the coefficient vector and the bookkeeping (length) vector. You're requesting a level-4 decomposition with the Dmey wavelet. If you had one-dimensional data, the decomposition would roughly follow the 1-D tree shown below.
There are usually two types of decomposition models used with wavelets. One is called packet decomposition; from an architectural standpoint it is similar to a full binary tree, since both branches are decomposed at every level.
The other one, which is the one you're most likely using, is less computationally expensive because it only decomposes one branch of the tree at each level. Maybe these images will shed some light:
[Figure: 1-D decomposition tree]
[Figure: 2-D decomposition tree]
Notes:
If you have a working model in MATLAB, you might want to look at MATLAB's C/C++ code generation, which can automatically convert MATLAB code to C++.
References:
Images are from Wikipedia or mathworks.com

Is NumPy slower than C++ linear algebra libraries like Eigen?

I use these libraries to implement neural networks. I prefer NumPy, because it is more convenient to prepare data with Python; however, I am concerned that NumPy is not as fast as C++ libraries.
NumPy is implemented in C, so most of the time you are just calling C functions and, for some functionality, optimized Fortran routines. Therefore, you will get decent speed with NumPy for many tasks, provided you vectorize your operations and don't write for loops over NumPy arrays. Of course, hand-optimized C code can be faster. On the other hand, NumPy contains a lot of already-optimized algorithms that may well be faster than not-so-optimal C code written by less experienced C programmers.
You can gradually move from Python to C with Cython, and/or use Numba for JIT compilation to machine or GPU code.
I have to say that I think the other answers here are missing some things.
First, as @Mike Muller correctly points out, Python's numerical libraries have C or Fortran (or both) backends, so the performance of pure Python is almost irrelevant (as opposed to the performance of the backend, which can be significant). In this respect, whether you're driving something like MKL from Python or from C++ hardly makes a difference.
There are two differences, though:
On the plus side for Python - it is interactive. This means that, especially in conjunction with something like the IPython Notebook, you can perform an operation and plot the result, perform another operation and plot the result, etc. It's hard to get this effect for exploratory analysis with a compiled language like C++ or Java.
On the minus side for Python - it, and its scientific ecosystem, handle multicores imperfectly, to say the least. This is a fundamental problem of the language itself (read about the GIL).

Higher-dimensional arrays with runge_kutta4

I want to solve a system of coupled differential equations using boost::numeric::odeint::runge_kutta4. It is a 3D lattice system, so it would be natural (and convenient) for me to work with 3D arrays. Is there a way for runge_kutta4 to work with user-defined data structures or boost::multi_array?
In principle this is possible. odeint provides a mechanism for using custom data structures: algebras and operations. Have a look at the odeint documentation on state types, algebras and operations. Either you use one of the existing algebras and adapt your data structure to work with it, or you implement your own algebra and instantiate the Runge-Kutta stepper with it.
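If writing an algebra sounds like too much, a minimal sketch of a pragmatic alternative (my illustration, not part of the answer above) is to flatten the 3D lattice into a std::vector<double> and index it manually, so the default algebra works out of the box; the lattice size and the toy decay right-hand side below are made up:

    #include <cstddef>
    #include <vector>
    #include <boost/numeric/odeint.hpp>

    using state_type = std::vector<double>;   // flattened Nx*Ny*Nz lattice

    const int Nx = 16, Ny = 16, Nz = 16;

    inline std::size_t idx(int i, int j, int k)
    {
        return (static_cast<std::size_t>(i) * Ny + j) * Nz + k;
    }

    // Toy right-hand side: independent exponential decay at every lattice site.
    void rhs(const state_type& x, state_type& dxdt, double /*t*/)
    {
        for (std::size_t n = 0; n < x.size(); ++n)
            dxdt[n] = -x[n];
    }

    int main()
    {
        state_type x(static_cast<std::size_t>(Nx) * Ny * Nz, 1.0);
        boost::numeric::odeint::runge_kutta4<state_type> stepper;
        boost::numeric::odeint::integrate_const(stepper, rhs, x, 0.0, 1.0, 0.01);
    }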
You might also want to have a look at a library like Eigen, MTL4, Boost.uBLAS, or Armadillo; they may have data types for higher-order tensors. For example, Eigen works very well with odeint.

Should I switch to MTL4 from Eigen if I also need to make use of ODEs?

I'm looking for C++ matrix libraries to work with on a Linux (Fedora) system. The intention is to implement continuous attractor neural networks and such for computational neuroscience. I've already begun using Eigen. However, I realized I need differential equation solvers too for my task and ran into Odeint (which recently seems to have been accepted into boost). Odeint works with MTL4 as this page details. I'm now wondering if I should rewrite my code using MTL4 instead of Eigen to be able to make use of odeint properly.
I've looked on both Google and Stack Overflow itself but failed to find a comparison.
I am pretty sure that Eigen will also work with odeint, so it is up to you whether you want to change to MTL.
The odeint documentation (http://headmyshoulder.github.com/odeint-v2/doc/boost_numeric_odeint/odeint_in_detail/state_types__algebras_and_operations.html) shows how to adapt an arbitrary type to odeint. Eigen supports expression templates (so you can write vector or matrix expressions like M1 = a*M2 + b*M3;), so you should be able to use odeint with the vector_space_algebra and the default_operations. All you have to do is adapt odeint's resizing mechanism. Have a look at the MTL bindings in odeint to see how this works; it is straightforward.
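As a rough sketch of that approach (my own illustration, not from the original answer): with a reasonably recent Boost you can hand an Eigen::VectorXd to runge_kutta4 and explicitly select the vector_space_algebra; newer odeint releases also ship ready-made Eigen bindings in boost/numeric/odeint/external/eigen/eigen.hpp, but the explicit template arguments below avoid relying on them. The right-hand side is an arbitrary linear example:

    #include <Eigen/Dense>
    #include <boost/numeric/odeint.hpp>

    namespace odeint = boost::numeric::odeint;

    using state_type = Eigen::VectorXd;

    // Toy right-hand side: dx/dt = -x. Assigning to dxdt lets Eigen resize it.
    void rhs(const state_type& x, state_type& dxdt, double /*t*/)
    {
        dxdt = -x;
    }

    int main()
    {
        state_type x = state_type::Constant(10, 1.0);

        // State, value, derivative and time types, plus the algebra to use.
        odeint::runge_kutta4<state_type, double, state_type, double,
                             odeint::vector_space_algebra> stepper;

        odeint::integrate_const(stepper, rhs, x, 0.0, 1.0, 0.01);
    }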

Running C++ code alongside and interacting with Python

So my current project is mostly in Python, but I'm looking to rewrite the most computationally expensive portions in C++ to try and boost performance. Much of this I can achieve via simple functions loaded from DLL files, but not everything. I have a multidimensional array in Python that I want to perform operations on in C++ (particularly A* pathfinding), but I'm not really sure how to translate them, and constantly sending data one piece at a time into a loaded function seems really inefficient (the array's first two dimensions are in the low hundreds, and the functions will need to deal with scores, if not hundreds, of elements in the array at a time).
My idea was to have a class in C++ that creates its own copy of the array at setup (where performance isn't as much of an issue) and has methods that operate on the array and return data to the main Python program. However, I'm not sure how to accomplish this, or even whether it is the proper way to go about such a thing; it would seem to imply having C++ code running in parallel with the main Python program, and intuition tells me that's a bad idea.
I don't know much about integrating C++ and Python beyond loading simple functions via ctypes, so I'd really appreciate some pointers here. Keep in mind I'm relatively new to C++; most of my programming experience is in Python. What would be the best way to fit the two together in this situation?
First and foremost, when you are working with multidimensional arrays in Python, you should really be using NumPy. Chances are that your program is already fast enough once you let NumPy do the number crunching (use array arithmetic instead of Python for loops).
If this is not enough, consider writing parts of your program using Cython. Cython also supports NumPy arrays and provides a painless way to write C code using Python-like syntax.
If it really must be C++, I highly recommend using Boost.Python. Bridging Python and C++ has never been this easy. Plus, Boost.Python comes with NumPy support as well (boost::python::numeric::array).
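As a rough sketch of that route (module and class names here are made up, and the grid is plain ints rather than your real data): a small Boost.Python extension can hold its own copy of the grid on the C++ side and expose methods to Python, which is essentially the setup you describe:

    #include <cstddef>
    #include <vector>
    #include <boost/python.hpp>

    // Toy stand-in for the idea in the question: a C++ object that keeps its
    // own copy of the grid and exposes methods back to Python.
    class Grid {
    public:
        Grid(int width, int height)
            : width_(width), height_(height),
              cells_(static_cast<std::size_t>(width) * height, 0) {}

        void set(int x, int y, int value) { cells_[index(x, y)] = value; }
        int  get(int x, int y) const      { return cells_[index(x, y)]; }

    private:
        std::size_t index(int x, int y) const
        {
            return static_cast<std::size_t>(y) * width_ + x;
        }

        int width_, height_;
        std::vector<int> cells_;
    };

    BOOST_PYTHON_MODULE(grid_ext)   // hypothetical module name
    {
        using namespace boost::python;
        class_<Grid>("Grid", init<int, int>())
            .def("set", &Grid::set)
            .def("get", &Grid::get);
    }

Compiled into grid_ext.so (or .pyd on Windows), Python can then construct grid_ext.Grid(200, 150) once at setup and call its methods afterwards, so the data lives on the C++ side and you only cross the language boundary once per call rather than once per element.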
Take a look at Cython.