I have two arrays of the same length; let's call them x and y.
When I plot them with plt.plot(x, y), the plot shows a continuous interpolation of my discrete data x and y.
How can I recover this interpolation from the plot?
Is there any other way to obtain more data points in (x, y)?
pyplot.plot() connects the points on the graph with straight line segments. That corresponds to linear interpolation, as long as the axes are linear (not logarithmic).
Among ready-made functions, look at numpy.interp().
For theory, refer to https://en.wikipedia.org/wiki/Linear_interpolation
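As a minimal sketch (the data arrays here are made up for illustration), numpy.interp() evaluates exactly the kind of piecewise-linear curve that plt.plot draws, at as many intermediate points as you like:

```python
import numpy as np

# Stand-ins for the x and y arrays in the question.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 2.0, 1.0, 3.0])

# Evaluate the linear interpolation at a denser grid of x values,
# including the midpoints 0.5, 1.5 and 2.5.
x_fine = np.linspace(0.0, 3.0, 7)
y_fine = np.interp(x_fine, x, y)

print(y_fine)  # the value at 0.5 is 1.0, halfway between (0,0) and (1,2)
```

This gives you more (x, y) data without going through the plot object at all.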
Related
I need to obtain the 3D plot of the joint probability distribution of two random variables x and y. Whereas this plot can be easily obtained with Mathematica, I wasn't able to find any documentation on how to do it in Python.
Can you help me out with that?
I'm currently using VTK to load several non-convex 3D objects (polyhedron as vtkPolyData) and I want to compute the minimum distances between pairs of these objects. For this I am using vtkSmartPointer<vtkDistancePolyDataFilter> with the two vtkPolyData as input.
My first question is: Am I right in assuming that this method computes the distance between each vertex-vertex pair of both input objects (see http://www.vtk.org/Wiki/VTK/Examples/Cxx/PolyData/DistancePolyDataFilter)? I have read on several pages that one cannot compute the minimum distance between two 3D objects this way; is this right? If so, why wouldn't it work this way?
If it is possible to compute the minimum distance of the two objects with this example, then I have two further questions:
How can I determine the point at an input 3D object, where the distance is minimum to the second input object? In other words, how do I get the value from the distancePolyDataFilter-output with the minimum distance?
EDIT: I used another example to measure the distance between each vertex-vertex pair of the two vtkPolyData, so that I can now access the point of the first vtkPolyData, at which the distance to the second vtkPolyData is minimum:
double vtkImplicitPolyDataDistance::EvaluateFunction(double x[3])
But I don't know how to get the point of the second vtkPolyData (the corresponding point for the minimum distance).
Secondly, is there any common way to reduce the set of points in the two vtkPolyData, so that I don't have to compute/compare each vertex-vertex pair? For each pair of 3D objects for which I want to compute the minimum distance, I can roughly determine their relative position to each other. For example, I know that object two lies above object one in the x-direction. But since the objects are non-convex, I cannot say that the maximum x-value of object one is smaller than the minimum x-value of object two. I also know that my 3D objects do not intersect, so there is always a positive minimum distance between them (but again, since the objects are non-convex, I cannot work with bounding boxes, or at least I cannot think of any way to use them).
I have read on several pages that one cannot compute the minimum distance between two 3D objects this way [between each vertex-vertex pair of both input objects]; is this right? If so, why wouldn't it work this way?
I was wondering about this same question, and here is an example illustrating how two meshes can be closer than any of their vertex pairs are.
The red points are their closest points, which are not vertices.
Sorry, I cannot speak to the VTK example.
I have some 3D points that roughly, but clearly, form a segment of a circle. I now have to determine the circle that best fits all the points. I think there has to be some sort of least-squares best fit, but I can't figure out how to start.
The points are sorted the way they would be situated on the circle. I also have an estimated curvature at each point.
I need the radius and the plane of the circle.
I have to work in c/c++ or use an extern script.
You could use a Principal Component Analysis (PCA) to map your coordinates from three dimensions down to two dimensions.
Compute the PCA and project your data onto the first two principal components. You can then use any 2D algorithm to find the centre of the circle and its radius. Once these have been found/fitted, you can project the centre back into 3D coordinates.
Since your data is noisy, there will still be some data in the third dimension you squeezed out, but bear in mind that the PCA chooses this dimension such as to minimize the amount of data lost, i.e. by maximizing the amount of data that is represented in the first two components, so you should be safe.
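A minimal sketch of this approach, assuming NumPy, with an algebraic (Kasa) least-squares circle fit for the 2D step. The function name is my own, not from a library:

```python
import numpy as np

def fit_circle_3d(points):
    """PCA-project noisy 3D circle points to 2D, fit a circle there,
    and map the fitted centre back to 3D. `points` is an (N, 3) array."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # PCA via SVD: the first two rows of Vt span the best-fit plane,
    # the third row is the plane normal.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ Vt[:2].T               # (N, 2) in-plane coordinates

    # Kasa fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c by linear least squares.
    A = np.column_stack([2 * uv[:, 0], 2 * uv[:, 1], np.ones(len(uv))])
    rhs = (uv ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a * a + b * b)

    # Project the 2D centre back into the original 3D coordinates.
    center_3d = centroid + np.array([a, b]) @ Vt[:2]
    normal = Vt[2]                          # normal of the circle's plane
    return center_3d, radius, normal
```

This returns exactly the quantities asked for: the radius and the plane (via its normal) of the circle, plus its 3D centre.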
A good algorithm for such data fitting is RANSAC (random sample consensus). You can find a good description in the link, so this is just a short outline of the important parts:
In your special case the model would be the 3D circle. To build it, pick three random non-collinear points from your set, compute the hyperplane they are embedded in (via the cross product), express the points in plane coordinates and then apply the usual 2D circle fitting. With this you get the circle center, radius and the hyperplane equation. Now it's easy to check the support by each of the remaining points. The support may be expressed as the distance from the circle, which consists of two parts: the orthogonal distance from the plane and the distance from the circle boundary inside the plane.
Edit:
The reason I would prefer RANSAC over ordinary least squares (LS) is its superior stability in the presence of heavy outliers. The following image shows an example comparison of LS vs. RANSAC: the ideal model line is found by RANSAC, while the dashed line is produced by LS.
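A sketch of that outline, assuming NumPy; the function name, iteration count and tolerance are my own choices, not from a library:

```python
import numpy as np

def ransac_circle_3d(points, n_iter=200, tol=0.05, rng=None):
    """RANSAC circle fit for (N, 3) points: sample 3 points, build their
    plane, fit the circle through them, keep the model with most support."""
    rng = np.random.default_rng(rng)
    best, best_support = None, -1
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(normal) < 1e-12:
            continue                            # collinear sample, skip
        normal = normal / np.linalg.norm(normal)
        u = (p2 - p1) / np.linalg.norm(p2 - p1) # in-plane basis (u, v)
        v = np.cross(normal, u)
        to2d = lambda p: np.array([(p - p1) @ u, (p - p1) @ v])
        a1, a2, a3 = to2d(p1), to2d(p2), to2d(p3)
        # Circle through three 2D points: intersect perpendicular bisectors.
        A = 2 * np.array([a2 - a1, a3 - a1])
        b = np.array([a2 @ a2 - a1 @ a1, a3 @ a3 - a1 @ a1])
        center2d = np.linalg.solve(A, b)
        radius = np.linalg.norm(center2d - a1)
        center3d = p1 + center2d[0] * u + center2d[1] * v
        # Support: combine the out-of-plane distance and the in-plane
        # distance from the circle boundary, as described above.
        d_plane = (points - center3d) @ normal
        in_plane = (points - center3d) - np.outer(d_plane, normal)
        d_circle = np.linalg.norm(in_plane, axis=1) - radius
        support = int((np.hypot(d_plane, d_circle) < tol).sum())
        if support > best_support:
            best_support, best = support, (center3d, radius, normal)
    return best
```

With noisy or outlier-ridden data you would tune `tol` to the expected noise level and, optionally, refit the circle on the final inlier set.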
The arguably easiest algorithm is called Least-Square Curve Fitting.
You may want to check the math,
or look at similar questions, such as polynomial least squares for image curve fitting
However I'd rather use a library for doing it.
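For instance, NumPy's polyfit does a polynomial least-squares fit in one call (the data below is made up purely for illustration):

```python
import numpy as np

# Made-up noisy samples of the quadratic 3x^2 - 2x + 1.
x = np.linspace(-1, 1, 21)
rng = np.random.default_rng(0)
y = 3 * x**2 - 2 * x + 1 + rng.normal(scale=0.01, size=x.shape)

# Least-squares fit of a degree-2 polynomial; coefficients come back
# highest power first, so this should land close to [3, -2, 1].
coeffs = np.polyfit(x, y, deg=2)
print(coeffs)
```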
I'm a student, and I've been tasked to optimize bilinear interpolation of images by invoking parallelism from CUDA.
The image is given as a 24-bit .bmp format. I already have a reader for the .bmp and have stored the pixels in an array.
Now I need to perform bilinear interpolation on the array. I do not understand the math behind it (even after going through the wiki article and other Google results). Because of this I'm unable to come up with an algorithm.
Is there anyone who can help me with a link to an existing bilinear interpolation algorithm on a 1-D array? Or perhaps link to an open source image processing library that utilizes bilinear and bicubic interpolation for scaling images?
The easiest way to understand bilinear interpolation is to understand linear interpolation in 1D.
This first figure should give you flashbacks to middle school math. Given some location a at which we want to know f(a), we take the neighboring "known" values and fit a line between them.
So we just used the old middle-school equations y=mx+b and y-y1=m(x-x1). Nothing fancy.
We basically carry over this concept to 2-D in order to get bilinear interpolation. We can attack the problem of finding f(a,b) for any a,b by doing three interpolations. Study the next figure carefully. Don't get intimidated by all the labels. It is actually pretty simple.
For a bilinear interpolation, we again use the neighboring points. Now there are four of them, since we are in 2D. The trick is to attack the problem one dimension at a time.
We project our (a,b) to the sides and first compute two (one dimensional!) interpolating lines.
f(a,yj) where yj is held constant
f(a,yj+1) where yj+1 is held constant.
Now there is just one last step. You take the two points you calculated, f(a,yj) and f(a,yj+1), and fit a line between them. That's the blue one going left to right in the diagram, passing through f(a,b). Interpolating along this last line gives you the final answer.
I'll leave the math for the 2-D case for you. It's not hard if you work from the diagram. And going through it yourself will help you really learn what's going on.
One last little note, it doesn't matter which sides you pick for the first two interpolations. You could have picked the top and bottom, and then done the third interpolation line between those two instead. The answer would have been the same.
When you enlarge an image by scaling the sides by an integral factor, you may treat the result as the original image with extra pixels inserted between the original pixels.
See the pictures in IMAGE RESIZE EXAMPLE.
The f(x,y)=... formula in this article in Wikipedia gives you a method to compute the color f of an inserted pixel:
For every inserted pixel you combine the colors of the 4 original pixels (Q11, Q12, Q21, Q22) surrounding it. The combination depends on the distance between the inserted pixel and the surrounding original pixels: the closer it is to one of them, the closer its color is to that pixel's color:
The original pixels are shown as red. The inserted pixel is shown as green.
That's the idea.
If you scale the sides by a non-integral factor, the formulas still hold, but now you need to recalculate all pixel colors as you can't just take the original pixels and simply insert extra pixels between them.
Don't get hung up on the fact that 2D arrays in C are really 1D arrays. It's an implementation detail. Mathematically, you'll still need to think in terms of 2D arrays.
Think about linear interpolation on a 1D array. You know the value at 0, 1, 2, 3, ... Now suppose I ask you for the value at 1.4. You'd give me a weighted mix of the values at 1 and 2: (1 - 0.4)*A[1] + 0.4*A[2]. Simple, right?
Now you need to extend to 2D. No problem. 2D interpolation can be decomposed into two 1D interpolations, in the x-axis and then y-axis. Say you want (1.4, 2.8). Get the 1D interpolants between (1, 2)<->(2,2) and (1,3)<->(2,3). That's your x-axis step. Now 1D interpolate between them with the appropriate weights for y = 2.8.
This should be simple to make massively parallel. Just calculate each interpolated pixel separately. With shared memory access to the original image, you'll only be doing reads, so no synchronization issues.
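A sketch of that per-pixel decomposition, here vectorized with NumPy over a 2D grayscale array rather than written as a CUDA kernel (on the GPU, each output pixel would simply be one thread; the function name is my own):

```python
import numpy as np

def resize_bilinear(img, new_h, new_w):
    """Bilinear resize of a 2D array: x-axis step at the two bracketing
    rows, then the y-axis step, each output pixel independent."""
    h, w = img.shape
    # Fractional source coordinates for every output pixel.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                 # weights along y
    wx = (xs - x0)[None, :]                 # weights along x
    # x-axis interpolation at the rows below and above...
    top = (1 - wx) * img[y0][:, x0] + wx * img[y0][:, x1]
    bot = (1 - wx) * img[y1][:, x0] + wx * img[y1][:, x1]
    # ...then y-axis interpolation between the two results.
    return (1 - wy) * top + wy * bot
```

For a 24-bit image you would run this once per color channel (or extend the indexing to a third axis).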
I have N points in 3-dimensional space. I need to join them using a line. However, if I do that using a simple line, it is not smooth and looks ugly.
My current approach is to use a Bezier curve, using the DeCasteljau algorithm for 4 points, and running that for each group of 4 points in my data set. However, the problem with this is that since I run it on say points 1-4, 5-8, 9-12, etc., separately, the line is not smooth between 4-5, 8-9, etc.
I also looked for other approaches; specifically I found this article about Catmull-Rom splines, which seem even better suited for my purpose, because the curve passes through all control points, unlike the Bezier curve. So I almost started implementing that, but then, I saw on that site that the formula works "assuming uniform spacing of control points". That is not the case for my problem.
So, my question is, what approach should I use -- Bezier, Catmull-Rom, or something completely different? If Bezier, then how to fix the non-smoothness between 4-5, 8-9, etc.? If Catmull-Rom, why won't the formula work if points are not evenly spaced, and what do I need instead?
EDIT: I am now pretty sure I want the Catmull-Rom spline, as it passes every control point which is an advantage for my application. Therefore, the main question I would like answered is why won't the formula on the link I provided work for non-uniformly spaced control points?
Thanks.
A couple of solutions:
Use a B-spline. This is a generalization of Bezier curves (a Bezier curve is a B-spline with no internal knot points.)
Use a cubic spline. Cubic splines are particularly easy to calculate. A cubic spline is continuous in the zero, first, and second derivatives across the control points. The third derivative, the cubic term, suffers a discontinuity at the control points, but it is very hard to see those discontinuities.
One key difference between a B-spline and a cubic spline is that the cubic spline will pass through all of the control points, while a B-spline does not. One way to think about it: Those internal control points are just suggestions for a B-spline but are mandatory for a cubic spline.
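An interpolating cubic spline through 3D points can be sketched as follows, assuming SciPy is available. Parametrizing by cumulative chord length handles unevenly spaced points; the function name is my own:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_through_points(points, n_samples=100):
    """Interpolating cubic spline through N points in 3D: parametrize by
    cumulative chord length, then spline each coordinate against it."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)])    # chord-length parameter
    spline = CubicSpline(t, points)              # vector-valued spline
    t_fine = np.linspace(t[0], t[-1], n_samples)
    return spline(t_fine)                        # (n_samples, 3) curve
```

The resulting curve passes through every control point and is continuous in the first and second derivatives, which is what makes the joins look smooth.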
A meaningful line (although not the simplest to evaluate) can be found via Gaussian Processes. You set (or infer) the lengthscale over which you wish the line to vary (i.e. the smoothness of the line) and then the GP line is the most probable line through the data given the lengthscale. You can add noise to the model if you don't mind the line not passing through the data points.
It's a nice interpolation method because you can also obtain the standard deviation of your line. The line becomes more uncertain where you don't have much data in the vicinity.
You can read about them in chapter 45 of David MacKay's Information Theory, Inference, and Learning Algorithms - which you can download from the author's website here.
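A minimal GP regression sketch in 1D with a squared-exponential kernel, using only NumPy (the function name and default parameters are my own choices):

```python
import numpy as np

def gp_line(x_train, y_train, x_test, lengthscale=1.0, noise=1e-3):
    """Posterior mean and standard deviation of a GP line at x_test.
    `lengthscale` controls how smooth the line is; `noise` lets the
    line deviate from the data points, as described above."""
    def k(a, b):  # squared-exponential covariance
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / lengthscale**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    # Predictive variance: prior variance minus what the data explains.
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    std = np.sqrt(np.maximum(var, 0.0))
    return mean, std
```

The standard deviation returned here is exactly the uncertainty band mentioned above: small near the data, growing toward the prior far away from it.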
One solution is the following page on Wikipedia: http://en.wikipedia.org/wiki/Bézier_curve; check the generalized approach for N control points.