Interpolation of surfaces / fitting surfaces: C++ library

I need a library in C/C++ to perform interpolation of surfaces f(x, y) = z, as an alternative to what exists in MATLAB (e.g. griddata).
For example: nearest-neighbor, linear, bilinear, bicubic, etc.
Any suggestions?

Numerical Recipes covers all that stuff; you could always write your own!
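For the bilinear case on a regular grid, "your own" really is only a few lines. A minimal sketch (the grid layout and names here are illustrative, and it assumes the query point lies inside the grid):

#include <cmath>
#include <vector>

// Bilinear interpolation on a regular grid: z is row-major with nx
// columns, grid spacing (dx, dy), origin (x0, y0).
double bilinear(const std::vector<double>& z, int nx,
                double x0, double y0, double dx, double dy,
                double x, double y)
{
    const double fx = (x - x0) / dx, fy = (y - y0) / dy;
    const int i = (int)std::floor(fx), j = (int)std::floor(fy);
    const double tx = fx - i, ty = fy - j;   // fractional parts in [0,1)
    const double z00 = z[j * nx + i],       z10 = z[j * nx + i + 1];
    const double z01 = z[(j + 1) * nx + i], z11 = z[(j + 1) * nx + i + 1];
    // interpolate in x along both rows, then in y between the results
    return (1 - ty) * ((1 - tx) * z00 + tx * z10)
         +      ty  * ((1 - tx) * z01 + tx * z11);
}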

Try xfarbe.


Matlab griddata equivalent in C++

I am looking for a C++ equivalent to Matlab's griddata function, or any 2D global interpolation method.
I have a C++ code that uses Eigen 3. I will have an Eigen vector that will contain x, y, and z values, and two Eigen matrices equivalent to those produced by meshgrid in MATLAB. I would like to interpolate the z values from the vector onto the grid points defined by the meshgrid equivalents (which will extend a bit past the outside of the original points, so minor extrapolation is required).
I'm not too bothered by accuracy--it doesn't need to be perfect. However, I cannot accept NaN as a solution--the interpolation must be computed everywhere on the mesh regardless of data gaps. In other words, staying inside the convex hull is not an option.
I would prefer not to write an interpolation from scratch, but if someone wants to point me to a pretty good (and explicit) recipe I'll give it a shot. It's not the most hateful thing to write (at least in an algorithmic sense), but I don't want to reinvent the wheel.
Effectively what I have is scattered terrain locations, and I wish to define a rectilinear mesh that nominally follows some distance beneath the topography for use later. Once I have the node points, I will be good.
My research so far:
The question asked here: MATLAB functions in C++ produced a close answer, but unfortunately the suggestion was not free (SciMath).
I have tried understanding the interpolation function used in Generic Mapping Tools, and was rewarded with a headache.
I briefly looked into the Grid Algorithms library (GrAL). If anyone has commentary I would appreciate it.
Eigen has an unsupported interpolation package, but it seems to just be for curves (not surfaces).
Edit: VTK has some matplotlib functionality. Presumably there must be interpolation used somewhere in it for display purposes. Does anyone know if that's accessible and usable?
Thank you.
This is probably a little late, but hopefully it helps someone.
Method 1.) Octave: If you're coming from MATLAB, one way is to embed the GNU MATLAB clone Octave directly into the C++ program. I don't have much experience with it, but you can call the Octave library functions directly from a C++ file.
See here, for instance: http://www.gnu.org/software/octave/doc/interpreter/Standalone-Programs.html#Standalone-Programs
griddata is included in Octave's geometry package.
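For what it's worth, here is a rough sketch of what the embedding looks like, based on the standalone-programs page linked above. The entry points (octave_main, feval, clean_up_and_exit) changed across Octave versions, so treat this as an outline to check against the docs for your version; the matrix sizes are placeholders:

#include <octave/oct.h>
#include <octave/octave.h>
#include <octave/parse.h>
#include <octave/toplev.h>

int main()
{
    // start an embedded interpreter (old-style API from the linked docs)
    string_vector argv(2);
    argv(0) = "embedded";
    argv(1) = "-q";
    octave_main(2, argv.c_str_vec(), /* embedded = */ 1);

    // pack scattered data into Octave matrices: x, y, z, xi, yi
    octave_value_list in;
    in(0) = Matrix(10, 1, 0.0);   // placeholders; fill with real data
    in(1) = Matrix(10, 1, 0.0);
    in(2) = Matrix(10, 1, 0.0);
    in(3) = Matrix(5, 5, 0.0);
    in(4) = Matrix(5, 5, 0.0);

    // then call griddata just as you would in Octave/MATLAB
    octave_value_list out = feval("griddata", in, 1);
    Matrix zi = out(0).matrix_value();

    clean_up_and_exit(0);
}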
Method 2.) PCL: The way I do it is to use the Point Cloud Library (http://www.pointclouds.org) and VoxelGrid. You can set the x and y bin sizes as you please, then set a really large z bin size, which gets you one z value for each x,y bin. The catch is that the x, y, and z values are the centroid of the points averaged into the bin, not the bin centers (which is also why it works for this), so you need to massage the x,y values when you're done:
Ex:
#include <cstdio>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>

// read in a list of comma separated values (x,y,z)
FILE *fp = fopen("points.xyz", "r");

// store them in PCL's point cloud format
pcl::PointCloud<pcl::PointXYZ>::Ptr basic_cloud_ptr(new pcl::PointCloud<pcl::PointXYZ>);
double x, y, z;
while (fscanf(fp, "%lg, %lg, %lg", &x, &y, &z) == 3)  // stop at EOF or a malformed line
{
    pcl::PointXYZ basic_point;
    basic_point.x = x; basic_point.y = y; basic_point.z = z;
    basic_cloud_ptr->points.push_back(basic_point);
}
fclose(fp);
basic_cloud_ptr->width = (int) basic_cloud_ptr->points.size();
basic_cloud_ptr->height = 1;

// create object for the result
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered(new pcl::PointCloud<pcl::PointXYZ>());

// create the filtering object and process
pcl::VoxelGrid<pcl::PointXYZ> sor;
sor.setInputCloud(basic_cloud_ptr);
// set the bin sizes here (dx, dy, dz); for 2-D results, make one of the
// bins larger than the data set span in that axis
sor.setLeafSize(0.1, 0.1, 1000);
sor.filter(*cloud_filtered);
So cloud_filtered is now a point cloud that contains one point for each bin. Then I just make a 2-D matrix and go through the point cloud assigning points to their x,y bins (see the sketch below) if I want an image, etc., as would be produced by griddata. It works pretty well, and it's much faster than MATLAB's griddata for large datasets.
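To illustrate that last step, a rough sketch of the binning (the variable names and the NaN fill for empty bins are my own choices; dx and dy must match what was passed to setLeafSize above):

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// snap each filtered centroid back to its (i, j) bin and store its z
// in a row-major grid; NaN marks bins that received no points
const double dx = 0.1, dy = 0.1;   // same leaf sizes as above
double xmin =  std::numeric_limits<double>::max(), ymin = xmin;
double xmax = -std::numeric_limits<double>::max(), ymax = xmax;
for (const auto& p : cloud_filtered->points) {
    xmin = std::min<double>(xmin, p.x); xmax = std::max<double>(xmax, p.x);
    ymin = std::min<double>(ymin, p.y); ymax = std::max<double>(ymax, p.y);
}
const int nx = (int)std::floor((xmax - xmin) / dx) + 1;
const int ny = (int)std::floor((ymax - ymin) / dy) + 1;
std::vector<double> grid(nx * ny, std::nan(""));
for (const auto& p : cloud_filtered->points) {
    const int i = (int)std::floor((p.x - xmin) / dx);
    const int j = (int)std::floor((p.y - ymin) / dy);
    grid[j * nx + i] = p.z;   // one centroid per occupied bin
}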

How to add matrices with alglib?

I already know how to multiply two matrices with alglib, using rmatrixgemm (see this question).
Is there a way to compute a linear combination of two matrices without using this function with B set to the identity? That wouldn't be very efficient.
Alglib provides tons of complex algorithms but I can't find such a basic function.
The manual is here.
I think rmatrixgencopy is ALGLIB's way of providing matrix addition when you set the alpha and beta input parameters both to 1.
rmatrixgencopy (C++)
rmatrixgencopy (C#)
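If that's right, usage would be something like the following. The semantics (B := alpha*A + beta*B) and the parameter order (sizes, alpha, A with its offsets, beta, B with its offsets) are my inference from the ablas conventions, so double-check them against the manual:

#include "linalg.h"

// assumed semantics: B := alpha*A + beta*B, so alpha = beta = 1 gives B := A + B
alglib::real_2d_array a, b;
a.setlength(2, 2);
b.setlength(2, 2);
a[0][0] = 1; a[0][1] = 2; a[1][0] = 3; a[1][1] = 4;
b[0][0] = 5; b[0][1] = 6; b[1][0] = 7; b[1][1] = 8;

alglib::rmatrixgencopy(2, 2,          // m, n
                       1.0, a, 0, 0,  // alpha, A, row/col offsets
                       1.0, b, 0, 0); // beta,  B, row/col offsets
// b should now hold A + B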
You might be able to just use alglib::cmatrixgemm (or its real counterpart, rmatrixgemm) to do addition.
This subroutine calculates C = alpha*op1(A)*op2(B) + beta*C, where:
C is an MxN general matrix,
op1(A) is an MxK matrix,
op2(B) is a KxN matrix,
and "op" may be the identity transformation, transposition, or conjugate transposition.
If you want to do C = A + C, you just need to set B = identity, alpha = 1, beta = 1, op = identity transformation.
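A sketch of that with the real-valued variant (the sizes and values are just for illustration; rmatrixgemm is the routine the question already linked to):

#include "linalg.h"

const alglib::ae_int_t m = 2, n = 2;
alglib::real_2d_array a, b, c;
a.setlength(m, n); b.setlength(n, n); c.setlength(m, n);
// ... fill a and c with your data ...
for (alglib::ae_int_t i = 0; i < n; i++)         // B = identity
    for (alglib::ae_int_t j = 0; j < n; j++)
        b[i][j] = (i == j) ? 1.0 : 0.0;

// C := 1.0 * A * I + 1.0 * C, i.e. C := A + C
alglib::rmatrixgemm(m, n, n,
                    1.0, a, 0, 0, 0,   // alpha, A, offsets, op1 = identity
                         b, 0, 0, 0,   //        B, offsets, op2 = identity
                    1.0, c, 0, 0);     // beta,  C, offsets

As the question notes, multiplying by an identity matrix just to get addition wastes O(n^3) work, so this is a workaround rather than a real solution.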
Why don't you try using another library that was created for the purpose of matrix math such as MTL4?
http://www.simunova.com/en/node/24
Manual - http://www.simunova.com/node/148

OpenCV, C++: Distance between two points

For a group project, we are attempting to make a game where functions are executed whenever a player forms a set of specific hand gestures in front of a camera. To process the images, we are using OpenCV 2.3.
During the image-processing we are trying to find the length between two points.
We already know this can be done very easily with the Pythagorean theorem, though it is said to require a lot of computing power, and we wish to do this with as few resources as possible.
We wish to know if there exists any built-in function within OpenCV or the C++ standard library which can handle low-resource calculation of the distance between two points.
We have the coordinates for the points, which are in pixel values (Of course).
Extra info:
Previous experience has taught us that OpenCV and other libraries are heavily optimized. As an example, we attempted to change the RGB values of the live image feed from the camera with a for loop going through each pixel. This resulted in a low frame rate. We then used a built-in OpenCV function instead, which gave us a high frame rate.
You should try this:

cv::Point a(1, 3);
cv::Point b(5, 6);
double res = cv::norm(a - b); // Euclidean distance
As you correctly pointed out, there's an OpenCV function that does some of your work :)
(Also check the other way)
It is called magnitude() and it calculates the distance for you. And if you have more than 4 vectors to process, it will use SSE (I think) to make it faster.
Now, the catch is that magnitude() only computes sqrt(x*x + y*y) from the components you hand it, so you have to compute the differences yourself (check the documentation). But if you do that with OpenCV functions as well, it should be fast.
// two sets of points, stored as float pairs so magnitude() can work on them
cv::Mat pts1(nPts, 1, CV_32FC2), pts2(nPts, 1, CV_32FC2);
// ... populate them ...
cv::Mat diffPts = pts1 - pts2;
std::vector<cv::Mat> channels;
cv::split(diffPts, channels);   // separate the x and y differences
cv::Mat dist;
cv::magnitude(channels[0], channels[1], dist); // voila!
The other way is to use a very fast integer sqrt:

// 15 times faster than the classical float sqrt.
// Reasonably accurate up to root(32500)
// Source: http://supp.iar.com/FilesPublic/SUPPORT/000419/AN-G-002.pdf
unsigned int root(unsigned int x){
    unsigned int a, b;
    b = x;
    a = x = 0x3f;            // initial guess
    x = b / x;               // three Newton-style refinement steps follow
    a = x = (x + a) >> 1;
    x = b / x;
    a = x = (x + a) >> 1;
    x = b / x;
    x = (x + a) >> 1;
    return x;
}
This ought to be a comment, but I don't have enough rep (50?) |-( so I post it as an answer.
What the guys are trying to tell you in the comments of your question is that if it's only about comparing distances, then you can simply use the squared distance
d^2 = dx*dx + dy*dy = (x1-x2)*(x1-x2) + (y1-y2)*(y1-y2)
thus avoiding the square root. But you can't, of course, skip the squaring.
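In code, comparing against a radius then looks like this (a trivial helper of my own, not an OpenCV function):

#include <opencv2/core/core.hpp>

// true if a and b are closer than r pixels: d < r  <=>  d*d < r*r
inline bool closerThan(const cv::Point& a, const cv::Point& b, int r)
{
    const int dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy < r * r;
}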
Pythagoras is the fastest way, and it really isn't as expensive as you think. It used to be, because of the square root, but modern processors can usually do that within a few cycles.
If you really need speed, use OpenCL on the graphics card for image processing.

Perlin's Noise with OpenGL

I was studying Perlin's Noise through some examples at http://dindinx.net/OpenGL/index.php?menu=exemples&submenu=shaders and couldn't help noticing that his make3DNoiseTexture() in perlin.c uses noise3(ni) instead of PerlinNoise3D(...).
Now why is that? Isn't Perlin's Noise supposed to be a summation of different noise frequencies and amplitudes?
Question 2: what do ni, inci, incj, inck stand for? Why use ni instead of x,y coordinates? And why is ni incremented with
ni[0]+=inci;
inci = 1.0 / (Noise3DTexSize / frequency);
I see that Hugo Elias created his Perlin2D with x,y coordinates, and so does PerlinNoise3D(...).
Thanks in advance :)
I now understand why and am going to answer my own question in hopes that it helps other people.
Perlin's Noise is actually a synthesis of gradient noises. To produce it, we compute, at each lattice corner surrounding the input point, the dot product of the randomly generated gradient vector with the vector pointing from that corner to the input point.
Now if the input point were a whole number, such as the x,y,z coordinates of a texture you want to create, the dot product would always return 0, which would give you flat noise. So instead we use inci, incj, inck as an alternative index. Yep, just an index, nothing else.
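A condensed sketch of what that indexing amounts to (my reconstruction of the loop shape in make3DNoiseTexture, with noise3 assumed to be Perlin's reference implementation):

#include <cstddef>

double noise3(double vec[3]);   // Perlin's reference gradient noise

// walk the texture with a fractional index ni so that noise3 never
// sees whole-number coordinates; inc plays the role of inci/incj/inck
void fillNoiseTexture(unsigned char* tex, int Noise3DTexSize, double frequency)
{
    const double inc = 1.0 / (Noise3DTexSize / frequency);
    double ni[3] = {0.0, 0.0, 0.0};
    std::size_t t = 0;
    for (int k = 0; k < Noise3DTexSize; ++k, ni[2] += inc) {
        ni[1] = 0.0;
        for (int j = 0; j < Noise3DTexSize; ++j, ni[1] += inc) {
            ni[0] = 0.0;
            for (int i = 0; i < Noise3DTexSize; ++i, ni[0] += inc)
                tex[t++] = (unsigned char)((noise3(ni) + 1.0) * 127.5); // [-1,1] -> [0,255]
        }
    }
}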
Now returning to question 1, there are two methods to implement Perlin's Noise:
1. Calculate the noise values separately and store them in the RGBA slots of the texture.
2. Synthesize the noises beforehand and store them in one of the RGBA slots of the texture.
noise3(ni) is the actual implementation of method 1, while PerlinNoise3D(...) suggests the latter.
In my personal opinion, method 1 is much better because you have much more flexibility over how you use each octave in your shaders.
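For comparison, method 2 boils down to the classic octave sum. Roughly (this mirrors the usual reference PerlinNoise3D signature, where alpha divides the amplitude per octave and beta multiplies the frequency; treat the exact parameter meanings as assumptions):

double noise3(double vec[3]);   // Perlin's reference gradient noise

// pre-summed fractal noise: each octave is quieter (by alpha) and
// higher-frequency (by beta) than the previous one
double perlinNoise3D(const double v[3], double alpha, double beta, int octaves)
{
    double sum = 0.0, scale = 1.0;
    double p[3] = {v[0], v[1], v[2]};
    for (int i = 0; i < octaves; ++i) {
        sum += noise3(p) / scale;
        scale *= alpha;
        p[0] *= beta; p[1] *= beta; p[2] *= beta;
    }
    return sum;
}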
My guess on the reason for using noise3(ni) in make3DNoiseTexture() instead of PerlinNoise3D(...) is that when you use that noise texture in your shader, you want to be able to replicate and modify the functionality of PerlinNoise3D(...) directly in the shader.
My guess for the reasoning behind ni, inci, incj, inck is that using the x,y,z of the volume directly doesn't give a good result, so by scaling the noise with the frequency instead, it is possible to adjust the resolution of the noise independently of the volume size.

What does a Gaussian blur algorithm look like? Are there examples of implementations?

I have a bitmap image context and want to make it appear blurry, so the best thing I can think of is a Gaussian blur. But I have no real idea what such Gaussian blur algorithms look like. Do you know good tutorials or examples on this? The language does not matter much, as long as it's done mostly by hand without relying too much on language-specific APIs. E.g. in Cocoa the lucky guys don't need to think about it; they just use an image filter that's already there. But I don't have something like that in Cocoa Touch (Objective-C, iPhone OS).
This is actually quite simple. You have a filter pattern (also known as a filter kernel) - a (small) rectangular array of coefficients - and just calculate the convolution of the image and the pattern.
for y = 1 to ImageHeight
    for x = 1 to ImageWidth
        newValue = 0
        for j = 1 to PatternHeight
            for i = 1 to PatternWidth
                newValue += OldImage[x - PatternWidth/2 + i, y - PatternHeight/2 + j] * Pattern[i, j]
        NewImage[x, y] = newValue
The pattern is just a Gaussian curve in two dimensions, or any other filter pattern you like. You have to take care at the edges of the image, because the filter pattern will be partially outside of the image. You can assume that these pixels are black, or use a mirrored version of the image, or whatever seems reasonable.
As a final note, there are faster ways to calculate a convolution using Fourier transforms, but this simple version should be sufficient for a first test.
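As a concrete starting point, here is the same idea as runnable C++ on a single-channel float image (the 3-sigma kernel radius and the edge clamping are my choices among the options mentioned above):

#include <algorithm>
#include <cmath>
#include <vector>

// Minimal single-channel Gaussian blur (a sketch, not an optimized
// implementation): build a (2r+1)x(2r+1) kernel from the 2-D Gaussian,
// normalize it, then convolve with edge clamping.
std::vector<float> gaussianBlur(const std::vector<float>& img,
                                int w, int h, float sigma)
{
    const int r = (int)std::ceil(3.0f * sigma);      // 3-sigma radius
    const int size = 2 * r + 1;
    std::vector<float> kernel(size * size);
    float sum = 0.0f;
    for (int j = 0; j < size; ++j)
        for (int i = 0; i < size; ++i) {
            const float dx = (float)(i - r), dy = (float)(j - r);
            kernel[j * size + i] = std::exp(-(dx*dx + dy*dy) / (2*sigma*sigma));
            sum += kernel[j * size + i];
        }
    for (float& k : kernel) k /= sum;                // normalize to 1

    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float v = 0.0f;
            for (int j = 0; j < size; ++j)
                for (int i = 0; i < size; ++i) {
                    // clamp sample coordinates at the image borders
                    const int sx = std::min(std::max(x + i - r, 0), w - 1);
                    const int sy = std::min(std::max(y + j - r, 0), h - 1);
                    v += img[sy * w + sx] * kernel[j * size + i];
                }
            out[y * w + x] = v;
        }
    return out;
}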
The Wikipedia article has a sample matrix in addition to some standard information on the subject.
Best place for image processing is THIS. You can get matlab codes there.
And this Wolfram demo should clear any doubts about doing it by hand.
And if you don't want to learn too many things, learn PIL (the Python Imaging Library).
"Here" is exactly what you need.
Code copied from above link:
import Image         # classic PIL modules
import ImageFilter

def filterBlur(im, ext):
    im1 = im.filter(ImageFilter.BLUR)
    im1.save("BLUR" + ext)

im = Image.open("photo.jpg")
filterBlur(im, ".jpg")