B-spline transformation vs. polynomial transformation in image registration

What is the difference between BSpline transformation based image registration and Polynomial transformation based image registration?

PSPLINE is an expansion. It takes a variable as input and produces more than one variable as output. The difference between PSPLINE and BSPLINE is that PSPLINE produces a piecewise polynomial, whereas BSPLINE produces a B-spline.
There is a misuse of the word "polynomial" for the transformation here.
B-splines are localized piecewise-polynomial functions with conveniently adjustable (dis)continuity in their derivatives.
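The local support of a B-spline basis function, in contrast to a global polynomial term, is easy to see numerically. A minimal sketch using SciPy (the knot vector here is illustrative, not tied to any particular registration package):

```python
import numpy as np
from scipy.interpolate import BSpline

# A cubic B-spline basis element on knots [0, 1, 2, 3, 4] is a piecewise
# polynomial that is non-zero only inside its knot span (local support),
# unlike a global polynomial basis term x^k, which is non-zero almost
# everywhere. extrapolate=False makes evaluation outside the span return nan.
b = BSpline.basis_element(np.arange(5.0), extrapolate=False)

# Positive inside the open support interval (0, 4)...
assert np.all(b(np.array([0.5, 2.0, 3.5])) > 0)
# ...and undefined (nan, i.e. no support) outside it.
assert np.isnan(b(4.5))
```

This locality is what makes B-spline deformation fields in registration cheap to update: moving one control point only changes the warp in a small neighborhood, whereas changing one coefficient of a polynomial transformation perturbs the whole image domain.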

Related

Calculating depth image from surface normal images, confused about integration/summation algorithm

I'm going through Forsyth and Ponce and working through their section on recreating a depth map from surface normals. The idea is that you sum partial derivatives along a path to calculate the shape, as explained in the textbook.
In the integration section, it goes over one "path" for calculating the depth map using the surface-normal partial-derivative matrices. It's been a while since I've done multivariate calculus, so my confusion is about why this specific integration represents one path, and how I can generate other paths from these same partial-derivative matrices. By the line-integral argument, the surface doesn't depend on the choice of curve; I'm just not sure what this means in terms of the discrete summation using the partial derivatives.
Any help would be appreciated!
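As a concrete illustration of what "one path" means discretely (a sketch with a synthetic depth map, not the book's notation): build forward-difference derivative matrices from a known depth, then sum them along two different monotone paths from (0, 0) to (r, c). Because the field is integrable by construction, both paths recover the same depth.

```python
import numpy as np

# Synthetic depth map and its discrete partial derivatives.
rng = np.random.default_rng(0)
z = rng.standard_normal((6, 6))
p = np.diff(z, axis=1)   # p[i, j] = z[i, j+1] - z[i, j]  (d/dx)
q = np.diff(z, axis=0)   # q[i, j] = z[i+1, j] - z[i, j]  (d/dy)

r, c = 4, 3
# Path 1: right along row 0, then down column c.
z1 = z[0, 0] + p[0, :c].sum() + q[:r, c].sum()
# Path 2: down column 0, then right along row r.
z2 = z[0, 0] + q[:r, 0].sum() + p[r, :c].sum()

# Both discrete "line integrals" telescope to the same value z[r, c].
assert np.isclose(z1, z[r, c]) and np.isclose(z2, z[r, c])
```

With real, noisy normal estimates the field is no longer exactly integrable, so different paths disagree; that is why practical methods average many paths or solve a least-squares (Poisson) problem instead of summing along a single one.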

OpenCV Projection Matrix Choice

I am currently facing a problem. To show you what my program does and should do, here is a copy/paste of the beginning of a previous post I made.
This program relies on the classic "structure from motion" method.
The basic idea is to take a pair of images, detect their keypoints and compute the descriptors of those keypoints. Then keypoint matching is done, with a certain number of tests to ensure the result is good. That part works perfectly.
Once this is done, the following computations are performed: fundamental matrix, essential matrix, SVD decomposition of the essential matrix, computation of the camera projection matrices and, finally, triangulation.
The result for a pair of images is a set of 3D coordinates, giving us points to be drawn in a 3D viewer. This works perfectly, for a pair.
However, I have to perform one step manually, and this is not acceptable if I want my program to work efficiently with more than two images.
Indeed, I compute my projection matrices according to the classic method, described in the paragraph "Determining R and t from E": https://en.wikipedia.org/wiki/Essential_matrix
I then have 4 possible solutions for my projection matrix.
I think I have understood the geometrical point of view of the problem, portrayed in this Hartley and Zisserman extract (chapters 9.6.3 and 9.7.1): http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf
Nonetheless, my question is: given the four possible projection matrices and the 3D points computed by the OpenCV function triangulatePoints() for each of them, how can I select the "true" projection matrix automatically? (Without having to draw my points 4 times in my 3D viewer to see whether they are consistent.)
Thanks for reading.
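The standard automatic criterion here is the cheirality test: the physically correct (R, t) is the one that reconstructs the points in front of both cameras (positive depth). A sketch, not the poster's code: `triangulate` is a minimal DLT stand-in for cv2.triangulatePoints() so the example is self-contained, and `pts1`/`pts2` are assumed to be 2×N arrays of normalized image coordinates.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one correspondence from two 3x4 matrices.
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def pick_projection(pts1, pts2, candidates):
    # Keep the (R, t) candidate that puts the most points in front of
    # BOTH cameras (positive depth in each camera frame).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    best, best_count = None, -1
    for R, t in candidates:
        P2 = np.hstack([R, t.reshape(3, 1)])
        count = 0
        for x1, x2 in zip(pts1.T, pts2.T):
            X = triangulate(P1, P2, x1, x2)
            if X[2] > 0 and (R @ X + t)[2] > 0:
                count += 1
        if count > best_count:
            best, best_count = (R, t), count
    return best
```

With noise-free data the correct candidate scores all points and the other three score (nearly) none; with real matches you simply take the maximum, optionally after RANSAC has removed outliers.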

Smoothing motion parameters

I have been working on video stabilization for quite a few weeks now. The algorithm I'm following basically involves 3 steps:
1. FAST feature detection and matching
2. Calculating an affine transformation (scale + rotation + translation x + translation y) from the matched keypoints
3. Smoothing the motion parameters using a cubic spline or B-spline
I have been able to calculate the affine transform, but I am stuck at smoothing the motion parameters: I have been unable to evaluate the spline function that should smooth the three parameters.
Here is a graph for smoothed data points
Any suggestions or help on how I can code this to get the desired result shown in the graph?
Here is the code that calculates the points on the curve:
B-spline Curves
But now the code uses all the control points as transform parameters in the formulation.
I think I will run it in post-processing (not in real time).
Did you run B-spline smoothing in real time?
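For the post-processing case, step 3 can be done with an off-the-shelf smoothing spline rather than hand-rolled control points. A sketch with SciPy on one synthetic parameter track (the smoothing factor `s` and the track itself are illustrative; in practice you would fit one spline per parameter: dx, dy, angle, scale):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# A jittery accumulated x-translation track over 100 frames (synthetic).
frames = np.arange(100)
rng = np.random.default_rng(0)
dx = np.cumsum(rng.standard_normal(100))

# Cubic smoothing spline: larger s allows more deviation from the raw data,
# hence a smoother camera path.
spline = UnivariateSpline(frames, dx, k=3, s=len(frames) * 4.0)
dx_smooth = spline(frames)

# The smoothed path varies less frame-to-frame than the raw one.
assert np.abs(np.diff(dx_smooth)).mean() < np.abs(np.diff(dx)).mean()
```

The stabilizing warp per frame is then the transform that moves the raw parameter value onto the smoothed curve; because the whole track must be known before fitting, this formulation is indeed post-processing, not real time.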

How to get the Gaussian matrix with variance σs in opencv?

I'm trying to design a line detector in OpenCV, and to do that I need the Gaussian matrix with variance σs.
The final formula should be
H = Gσs ∗ (G′σd)ᵀ, where H is the detector that I'm going to create, but I have no idea how I am supposed to create the matrix with the given variance, let alone finally compute H.
Update
This is the full formula, where "T" is the transpose operation and G′σd is the first-order derivative of a 1-D Gaussian function Gσd with variance σd in this direction.
Update
These are the two formulas that I want. I need H for further use, so please tell me how to generate the matrix. Thanks!
As a Gaussian filter is quite common, OpenCV has a built-in operation for it: GaussianBlur.
When you use that function you can set the ksize argument to (0, 0) to automatically compute the pixel size of the kernel from the given sigmas.
A 2D Gaussian filter kernel is separable: you can first apply a 1D filter along the x axis and then a 1D filter along the y axis. That is the reason for having two 1D filters in the equation above. Doing two 1D filter operations is much faster than one 2D operation.
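If you need the kernel matrix H itself rather than a blurred image, you can build the two 1-D factors and take their outer product. A NumPy sketch (sigma values and radii are illustrative; cv2.getGaussianKernel would give you the plain Gaussian factor directly):

```python
import numpy as np

def gaussian_1d(sigma, radius):
    # Normalized 1-D Gaussian sampled on integer offsets [-radius, radius].
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def gaussian_deriv_1d(sigma, radius):
    # First derivative of the normalized 1-D Gaussian: g'(x) = -x/sigma^2 * g(x).
    x = np.arange(-radius, radius + 1, dtype=float)
    g = gaussian_1d(sigma, radius)
    return -x / sigma**2 * g

sigma_s, sigma_d = 2.0, 1.0
g_s = gaussian_1d(sigma_s, 6)            # smoothing along the line
g_d = gaussian_deriv_1d(sigma_d, 3)      # derivative across the line

# H = g_s (g_d)^T: an outer product, i.e. a rank-1 and therefore separable kernel.
H = np.outer(g_s, g_d)

assert H.shape == (13, 7)
assert np.linalg.matrix_rank(H) == 1     # rank 1 <=> separable
```

Because H is rank 1, filtering with it is exactly equivalent to filtering the rows with g_d and then the columns with g_s (e.g. via cv2.sepFilter2D), which is the fast path the answer above describes.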

displacement between two images using opencv surf

I am working on image processing with OpenCV.
I want to find the x, y and rotational displacement between two images in OpenCV.
I have found the features of the images using SURF, and the features have been matched.
Now I want to find the displacement between the images. How do I do that? Can RANSAC be useful here?
A rotation and two translations are three unknowns, so your minimum number of matches is two (since each match delivers two equations or constraints). Indeed, imagine a line segment between two points in one image and the corresponding (matched) line segment in the other image. The difference between the segments' orientations gives you the rotation angle. After rotating, just use any of the matched points to find the translation. Thus this is a 3-DOF problem that requires two points. It is called a Euclidean transformation, rigid-body transformation, or orthogonal Procrustes.
Using a homography (an 8-DOF problem), which has no closed-form solution and relies on non-linear optimization, is a bad idea: it is slow (in the RANSAC case) and inaccurate, since it adds 5 extra DOF. RANSAC is only needed if you have outliers. In the case of pure noise and an overdetermined system (more than 2 points), the optimal solution that minimizes the sum of squares of the geometric distance between matched points is given in closed form by:
Problem statement: min ‖R·P + t − Q‖², with R a rotation and t a translation
Solution: R = V·Uᵀ, t = Qmean − R·Pmean
where X = P − Pmean and Y = Q − Qmean, and we take the SVD X·Yᵀ = U·L·Vᵀ; all matrices have the data points as columns. For a gentle intro to rigid transformations see this
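The closed-form solution described above fits in a few lines of NumPy (a sketch for the 2-D case; the reflection correction via the sign of the determinant is the standard Kabsch safeguard, not something extra from this thread):

```python
import numpy as np

def rigid_transform(P, Q):
    # P, Q: 2xN arrays of matched points (points as columns).
    # Returns R (2x2 rotation) and t (2x1) minimizing ||R P + t - Q||^2.
    Pm = P.mean(axis=1, keepdims=True)
    Qm = Q.mean(axis=1, keepdims=True)
    X, Y = P - Pm, Q - Qm                      # centered point sets
    U, _, Vt = np.linalg.svd(X @ Y.T)          # SVD of the cross-covariance
    # Force det(R) = +1 so we get a rotation, never a reflection.
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Qm - R @ Pm
    return R, t
```

With more than two matches this is the least-squares optimum in one shot; wrap it in RANSAC only if the match set contains outliers.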